Mobile phones are the ubiquitous platform used by billions of people globally, every day. However, two concerns signal a pause for reflection and change. First, while mobiles have rapidly become indispensable, the effect that constant device use has on our lives, our experiences, and the interactions we have with others has caused growing discomfort [4, 7]. At the same time, there is a broad sense that mainstream mobile devices have fallen into a period of innovation limbo, with recent releases seemingly distinguished only by ever-narrowing feature gaps. As a recent Economist article bleakly reports, "More black rectangles made their debut" [2].
This course will challenge attendees to play a part in reinvigorating mobile interaction design. We celebrate the success of apps, services, and the hugely popular ecology of mobile devices, but want to promote a return to radical innovation.
We have been fortunate enough to have collaborated with a broad range of industrial and academic researchers and practitioners over many years. More importantly, however, we have worked with a wide range of people who are not considered to be typical "future makers," and who are also not usually considered when designing mobile user experiences. These people - who have been called "emergent" users [1] - are often drawn from developing regions and face lower literacy, lower socioeconomic conditions, and other constraints. Our experience of working with them has demonstrated how their unique and contrasting outlooks on technology and ways of seeing the world are invaluable in generating radically new and exciting digital innovations.
Over the last two decades, creative, agile, lean, and strategic design approaches have become increasingly prevalent in the development of interactive technologies, but tensions exist with longer-established approaches such as human factors engineering and user-centered design. These tensions can be harnessed productively by first giving equal status in principle to creative, business, and agile engineering practices, and then supporting this with flexible critical approaches and resources that can balance and integrate a range of multidisciplinary design practices.
The objective of this course is to provide newcomers to Human-Computer Interaction (HCI) with an introduction and overview of the field. Attendees often include practitioners without a formal education in HCI, and those teaching HCI for the first time. This course includes content on theory, cognition, design, evaluation, and user diversity.
Everything that we do as researchers is based on what we write. Especially for graduate students and young researchers, it is hard to turn a research project into a successful CHI publication. The struggle continues for postdocs and young professors trying to provide excellent reviews for the CHI community that pinpoint flaws and improvements in research papers. This second edition of the successful CHI paper writing course provides hands-on advice on how to write papers with clarity, substance, and style. It is structured into three 80-minute units focused on writing and reviewing.
Hand-drawn sketches are an easy way for researchers to communicate and express ideas, as well as to document, explore, and describe concepts among researchers, users, and clients. Sketches are fast, easy to create, and -- by varying their fidelity -- they can be used in all areas of HCI. The Applied Sketching in HCI course will explore and demonstrate themes around sketching in HCI with the aim of producing tangible outputs. Those attending will leave the course with the confidence to engage actively with sketching on a day-to-day basis. Participants will be encouraged to apply what they have learned to their own research.
Doing research with children requires special attention to recruitment, fair inclusion, and methods. Drawing on over fifteen years of experience working with children in HCI contexts, this practical course will give attendees the essential knowledge to do safe, effective, and ethically sound research with children. The course covers the whole research journey, from recruitment through to publication, and is delivered over two sessions.
The population of the developed world is aging. Most websites, apps, and digital devices are used by adults aged 50+ as well as by younger adults, so they should be designed accordingly. This course, based on the presenter's new book, presents age-related factors that affect older adults' ability to use digital technology, as well as design guidelines that reflect older adults' varied capabilities, usage patterns, and preferences.
In this course, we will take a detailed look at various breeds of spatial navigation interfaces that allow for locomotion in digital 3D environments such as games, virtual environments, or even the exploration of abstract data sets. We will look closely into the basics of navigation, unravelling both the psychophysics (including wayfinding) and the actual locomotion (travel) aspects. These theoretical foundations form the basis for the practical skillset we will develop through an in-depth discussion of navigation devices and techniques and a step-by-step discussion of multiple real-world case studies. In doing so, we will cover the full range of navigation techniques from handheld to full-body, highly engaging, and partly unconventional methods, and tackle spatial navigation with hands-on experience and tips for the design and validation of novel interfaces. In particular, we will look at affordable setups and ways to "trick" users into a realistic feeling of self-motion in the explored environments. As such, the course unites the theory and practice of spatial navigation, serving as an entry point for understanding and improving upon currently existing methods for the application domain at hand.
We will explore how deep learning approaches can be used for perceiving and interpreting the state and behavior of human beings in images, video, audio, and text data. The course will cover how convolutional, recurrent, and generative neural networks can be used for applications such as face recognition, eye tracking, cognitive load estimation, emotion recognition, natural language processing, voice-based interaction, and activity recognition. The course is open to beginners and is designed for those who are new to deep learning, but it can also benefit advanced researchers looking for a practical overview of deep learning methods and their application.
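As a flavor of the hands-on material, here is a minimal sketch (not taken from the course) of a small convolutional network for frame-based emotion recognition in Keras; the 48x48 grayscale input and seven emotion classes are assumptions modeled on the public FER-2013 dataset.

```python
# Illustrative only: a tiny CNN classifier for emotion recognition from
# face crops. Input size (48x48x1) and 7 classes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(48, 48, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(7, activation="softmax"),  # one unit per emotion class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=10)  # given labeled face crops
```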
There are many resources aimed at building one's research brand; this is not one of them. Instead, this course supports new and early-career researchers in identifying and reinforcing three key elements of research identity: author name, research motivation and interest, and paper titles. We will begin by giving evidence-based advice on how to select an author name. We will then discuss research character, including topics, overarching interests, and specialties. Finally, we will discuss how to write paper titles that reflect content, identity, and publication venue. Each theme will be supported by practical work.
"Attractive things work better" [3]. Users perceive aesthetic designs as easier to use than less-aesthetic designs, whether they are or not. Aesthetic designs are "[...] more readily accepted and used over time, and promote creative thinking and problem solving" [2] [1]. Therefore, an appealing visual design can be an important success criteria for websites, apps or other software products and should be of interest to anyone involved in creating them. Visual design is not something magic. It can be broken down into basic elements and principles. To understand them no particular design talent is required. In this course participants will be introduced to those elements, principles and a set of accompanying guidelines. They will put them to practice in a number of hands-on exercises with a tool of their choice (Powerpoint, Keynote, Sketch, Adobe Illustrator or Adobe Photoshop).
We can't predict the future, but by understanding the forces that shaped the present - the differing priorities and methods of computer science, human factors, psychology, design, information systems, information science, and other disciplines that influenced human-computer interaction - we can more effectively direct our efforts. We can learn where to look and how to interpret what we find.
This course covers how the disciplines that have contributed to human-computer interaction progressed from their formation to the present. Software went from passively reacting to human input to today's dynamic presence. HCI changed as it steadily realized goals outlined by pioneers. Although we are now in a new era, understanding how past events unfolded can prepare us for the surprises that lie ahead.
This course is a hands-on introduction to interactive electronics prototyping for people with a variety of backgrounds, including those with no prior experience in electronics. Familiarity with programming is helpful, but not required. Participants learn basic electronics, microcontroller programming and physical prototyping using the Arduino platform, then use digital and analog sensors, LED lights and motors to build, program and customize a small paper robot.
In this two-session course, attendees will learn how to conduct empirical research in human-computer interaction (HCI). This course delivers an A-to-Z tutorial on designing a user study and demonstrates how to write a successful CHI paper. It would benefit anyone interested in conducting a user study or writing a CHI paper. Only general HCI knowledge is required.
This course takes a practical approach to introducing the principles, methods, and tools of task modeling and shows how this technique can support the identification of automation opportunities, dangers, and limitations. A technical, interactive, hands-on exercise demonstrates how to "do it right," addressing questions such as: How to go from task analysis to task models? How to identify tasks that are good candidates for automation (through analysis and simulation)? How to identify reliability and usability dangers added by automation? How to design usable automation at the system, application, and interaction levels? And more...
UI design rules and guidelines are not simple recipes. Applying them effectively requires determining rule applicability and precedence and balancing trade-offs when rules compete. By understanding the underlying psychology, designers and evaluators enhance their ability to apply design rules. This two-part (160-minute) course explains that psychology.
This course will introduce participants to the three main stages of the development life cycle of gesture-based interactions: (i) how to design a gesture user interface (UI) by carefully considering key aspects, such as gesture recognition techniques, variability in gesture articulation, properties of invariance (sampling, direction, position, scale, rotation), and good practices for gesture set design; (ii) how to implement a gesture UI with existing recognizers, software architectures, and libraries; and (iii) how to evaluate a gesture UI with the help of various metrics of user performance. The course will also cover the wide range of gesture types, such as touch, finger, wrist, hand, arm, and whole-body gestures. Participants will be invited to try out various tools on their own laptops and will leave the course with a set of useful resources for prototyping and evaluating gesture-based interactions in their own projects.
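To illustrate the kind of recognizer the course discusses, below is a minimal, self-contained Python sketch in the spirit of the $1 family of template matchers (not the course's own code): candidate strokes are resampled to a fixed point count, normalized for position and scale, and matched to templates by mean point-to-point distance.

```python
# Simplified template-based gesture recognizer (illustrative sketch).
import math

N = 32  # number of resampled points per gesture

def path_length(pts):
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

def resample(pts, n=N):
    # Walk the stroke, emitting points at equal arc-length intervals.
    interval = path_length(pts) / (n - 1)
    pts = list(pts)
    out, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= interval:
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # continue walking from the interpolated point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:  # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def normalize(pts):
    # Translate to centroid and scale to a unit box: position/scale invariance.
    xs, ys = zip(*pts)
    w = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    return [((x - cx) / w, (y - cy) / w) for x, y in pts]

def recognize(candidate, templates):
    c = normalize(resample(candidate))
    def score(t):
        return sum(math.dist(a, b) for a, b in zip(c, t)) / N
    return min(templates.items(), key=lambda kv: score(kv[1]))[0]

templates = {"line": normalize(resample([(0, 0), (100, 0)])),
             "vee":  normalize(resample([(0, 0), (50, 50), (100, 0)]))}
print(recognize([(0, 2), (98, -1)], templates))  # -> "line"
```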
Data science requires metrics. But how does a researcher measure constructs such as delight, immersion, or intention to use? It's best to develop a suitable measure, rather than to just throw something together or use an inappropriate scale. This course presents seven simplified steps for developing a valid and reliable measure. The new scale can then be used to quantify and explain user behavior, make decisions and predictions, and build models. This half-day class is intended for anyone who desires a rapid but thorough overview of how to develop a measure, and it requires a modest understanding of statistics.
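As a small illustration of the kind of computation involved in validating a new scale (not material from the course), the sketch below computes Cronbach's alpha, a standard internal-consistency estimate; the response data are made up.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances)/var(sum scores)).
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' sum scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

responses = [[4, 5, 4, 4],   # each row: one respondent's item ratings (made up)
             [2, 2, 3, 2],
             [5, 4, 5, 5],
             [3, 3, 3, 4],
             [4, 4, 5, 4]]
print(round(cronbach_alpha(responses), 3))  # values >= ~0.7 are often deemed acceptable
```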
HCI research has long been dedicated to better and more naturally facilitating information transfer between humans and machines. Unfortunately, humans' most natural form of communication, speech, is also one of the most difficult modalities for machines to understand -- despite, and perhaps because, it is the highest-bandwidth communication channel we possess. While significant research efforts, from engineering to linguistics to the cognitive sciences, have been spent on improving machines' ability to understand speech, the CHI community (and the HCI field at large) has been relatively timid in embracing this modality as a central focus of research. This can be attributed in part to the unexpected variations in error rates when processing speech, in contrast with often-unfounded claims of success from industry, but also to the intrinsic difficulty of designing and especially evaluating speech and natural language interfaces. As such, the development of interactive speech-based systems is mostly driven by engineering efforts to improve such systems with respect to largely arbitrary performance metrics, often void of any user-centered design principles or consideration for usability or usefulness. The goal of this course is to inform the CHI community of the current state of speech and natural language research, to dispel some of the myths surrounding speech-based interaction, and to provide an opportunity for researchers and practitioners to learn how speech recognition and speech synthesis work, what their limitations are, and how they can be used to enhance current interaction paradigms. Through this, we hope that HCI researchers and practitioners will learn how to combine recent advances in speech processing with user-centered principles to design more usable and useful speech-based interactive systems.
This course introduces participants to concepts of gamification and offers practice in gamification evaluation with a set of heuristics for evaluating gameful applications and gameful design. We will introduce participants to common gameful intervention strategies for adding game design elements that can motivate users, and then train participants in our set of 28 gamification heuristics for rapid evaluation of gameful systems. The course is structured into three 80-minute units, which will give participants enough time to learn how to gamify activities, apply the new heuristics, and improve their gameful designs. The course instructors, Gustavo Tondello and Lennart Nacke, developed the gameful design heuristics and previously taught a successful gamification course at CHI 2017.
This course introduces computational methods in human-computer interaction. Computational interaction methods use computational thinking -- abstraction, automation, and analysis -- to explain and enhance interaction. The course introduces optimization and probabilistic inference as principled methods. Lectures center on hands-on Python programming, interleaving theory and practical examples.
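As a taste of the probabilistic-inference theme, here is a minimal, illustrative Python sketch (not the course's actual materials): Bayes' rule is used to decode which key a noisy touch was aimed at. The key layout, prior, and noise level are all assumptions.

```python
# Bayesian touch decoding: posterior over intended keys given a touch point,
# modeling the touch as intended target + isotropic Gaussian motor noise.
import math

keys = {"A": (0.0, 0.0), "B": (1.0, 0.0), "C": (2.0, 0.0)}  # key centers (cm)
prior = {"A": 0.5, "B": 0.3, "C": 0.2}  # e.g., from a language model (assumed)
sigma = 0.4                              # touch noise std. dev. (cm), assumed

def posterior(touch):
    def likelihood(center):
        dx, dy = touch[0] - center[0], touch[1] - center[1]
        return math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))
    unnorm = {k: prior[k] * likelihood(c) for k, c in keys.items()}
    z = sum(unnorm.values())
    return {k: v / z for k, v in unnorm.items()}

print(posterior((0.8, 0.1)))  # touch lands near B, but the prior favors A
```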
This course aims to introduce key issues in contemporary ethnographic practice, emphasizing the role of writing style and the epistemic position of the fieldworker in shaping a particular perspective on the observed phenomena. It outlines the theoretical assumptions that lie behind the traditional "realist position" of HCI ethnographies and proposes methodological tools for conducting and writing reflexive ethnographies, valuing the role of the ethnographer and her subjective experiences.
Aggression manifests in many forms online (e.g. cyberbullying, flaming, doxxing, hate speech), yet studies of online aggression typically overlook variability between forms, motivations and communities. Through a series of four studies, my doctoral research explores how social norms develop and evolve in online communities, and how these may give rise to cyber-aggression.
Personal informatics technologies, such as consumer fitness tracking devices, have an enormous potential to transform the self-management of chronic conditions. However, it is unclear how people living with relapsing and progressive illnesses experience personal informatics tools in everyday life: what values and challenges are associated with their use? This research informs the design of future health tracking technologies through an ethnographic design study of the use and experience of personal informatics tools in multiple sclerosis (MS) self-management. Initial findings suggest that future health tracking technologies should acknowledge people's emotional wellbeing and foster flexible and mindful self-tracking, rather than focusing only on tracking primary disease indicators and optimising health behaviours.
The objective of this research is to examine the relationships that exist between immersion, adaptive resistance, and physical exertion. Through physiological gaming interventions, it explores whether decoupling human and machine haptics during physical activity in an immersive environment may facilitate increased physical output.
Due to the enormous amount of information carried over online systems today, no user can access all such information. Therefore, to help users, all major online organizations deploy information retrieval (content recommendation, search, or ranking) systems to surface important information. Current information retrieval systems have to make certain design choices. For example, news recommendation systems need to decide on the quality of recommended news stories and how much emphasis to give to a story's long-term importance over its recency or freshness. Similarly, recommendation systems over user-generated content (e.g., in social media like Facebook and Twitter) need to take into account content posted by heterogeneous user groups. However, such design choices can introduce unintended biases in the content presented to users. For example, the recommended content may have poor quality or little news value, or the news discourse may get hijacked by hyperactive demographic groups. In this thesis, we want to systematically measure the effect of such design choices in content recommendation systems and build alternate recommendation systems that mitigate the biases in the recommendation output.
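To make the design-choice point concrete, the following hypothetical sketch (not the author's system) shows a news-ranking score that trades off quality against recency; the weight w is exactly the kind of parameter whose setting can bias what users see.

```python
# A made-up ranking score: quality vs. exponentially decaying recency.
import math

def score(quality, age_hours, w=0.7, half_life=24.0):
    """quality in [0, 1]; recency halves every `half_life` hours."""
    recency = math.exp(-math.log(2) * age_hours / half_life)
    return w * quality + (1 - w) * recency

stories = [("in-depth report", 0.9, 30.0),  # high quality, older
           ("breaking rumor", 0.4, 1.0)]    # low quality, fresh
for name, q, age in stories:
    print(name, round(score(q, age), 3))
# Lowering w toward 0 makes the ranking chase freshness over news value.
```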
My research uses computational methods to understand deviant online communities and to assess the health and well-being of their members. My prior work studies the pro-eating disorder community, a specific deviant community that glorifies disordered eating behaviors. In my dissertation, I expand on this work in three ways: helping moderators manage deviant mental wellness content, understanding normative behaviors of support in communities, and examining the ethical issues of predicting individualized mental wellness. Understanding these behaviors at scale can help medical research develop better intervention strategies through social media, as well as improve our understanding of bad behavior so we can build better online communities.
The goal of my research is to study how individuals perform self-experiments and to build behavior-powered systems that help them run such experiments. I have developed SleepCoacher, a sleep-tracking system that provides actionable personalized recommendations for improving sleep and evaluates their effect. Going further, my aim is to expand beyond sleep and develop the first guided self-experimentation system, which educates users about health interventions and helps them plan and carry out their own experiments. My thesis aims to use self-experimentation to help people take better care of their well-being by uncovering hidden causal relationships in their lives.
Drawing on the fields of HCI, ICTD, and Social Computing, my research explores how increasing internet access influences the lives of intended users and how we might leverage local information infrastructures to design more effective services for users in emerging markets. Through ethnographic research in Havana, my dissertation unpacks the ways individuals actively and creatively stitch together multiple information infrastructures to create their own versions of the "internet." Using Cuba as a case study, my work explores how future internet access initiatives might successfully map onto local information infrastructures to provide meaningful, sustainable engagement with the internet among under-connected communities in resource-constrained parts of the world.
Design of interactive technology provides opportunities as well as constraints in how a group of users can organize in a shared space. The core argument of interaction proxemics is to consider this in designing for collaboration. In my thesis, I focus on conceptualizing design of ubicomp technologies in this way. My goal with the concepts of proxemic configurations and transitions is to design for more flexibility in how users can collaborate and act socially through technology.
The world is full of information, interfaces, and environments that are inaccessible to blind people. When navigating indoors, blind people are often unaware of key visual information, such as posters, signs, and exit doors. When accessing specific interfaces, blind people cannot do so independently without first learning their layout and labeling them with sighted assistance. My work investigates interactive systems that integrate computer vision, on-demand crowdsourcing, and wearables to amplify the abilities of blind people, offering solutions for real-time environment and interface navigation. My work provides more options for blind people to access information and increases their freedom in navigating the world.
Cybersecurity advocates attempt to counter the tsunami of cyber attacks by promoting security best practices. However, little is known about the skills necessary for success or about which advocacy techniques are most effective. My research attempts to fill this gap by exploring the motivations, characteristics, and practices of cybersecurity advocates. The research informs educational recommendations for developing advocates and suggests how successful advocacy techniques can be incorporated into tools and interfaces that promote beneficial security behavior.
Our bodies are in a constant state of flux. Self-tracking technologies are increasingly used to understand, track, and predict these fluxes and physiological processes. This paper outlines ongoing research that investigates the mediating qualities of self-tracking technologies. As physiological fluxes and processes are more commonly experienced by women, and have historically been used as a tool for subjugation, a feminist perspective and methodology is applied within this research. Methods including research-through-design and speculative and critical design are used to test the hypothesis that speculating on the design of self-tracking technologies can contribute valuable knowledge to HCI and interaction design on subjects such as the societal taboos and prejudices surrounding the changing body, the privacy of biodata, and how identity and sense of self are shaped through the act of self-tracking.
Smart homes are expected to reduce the time we spend on routine activities. Data becomes an enabler and asset as smart devices collect and process it. Smart homes also have the potential to exacerbate bewilderment and resistance, feelings people express when their privacy is infringed. Because privacy is influenced by socio-cultural factors and shaped by technology, this work argues for a thorough understanding of the home's socio-cultural context. We aim to provide a grounded-in-data, contextual smart home privacy model. The model will be applicable to product and policy design, and will also inform future work in privacy research for ubiquitous computing settings.
Touchscreens enable intuitive interaction through a combination of input and output. Despite these advantages, touch input on smartphones still poses major challenges that impact usability, including the fat-finger problem, reachability challenges, and the lack of shortcuts. To address these challenges, I explore interaction methods for smartphones that extend the input space beyond single taps and swipes on the touchscreen. This includes interacting with different fingers and parts of the hand, driven by machine learning and raw capacitive data of the full interacting hand. My contribution is further broadened by the development of smartphone prototypes with full on-device touch sensing capability and an understanding of the physiological limitations of the human hand to inform the design of fully hand-and-finger-aware interaction.
Impulse buying is a common behavior that can result in financial strain and feelings of regret and shame. Despite this and consumers' expressed desire to curb impulse buying, HCI and consumer behavior research has yet to explore technological interventions to support these consumers. This proposed dissertation draws on various theories of impulse buying to design and experimentally test interventions that either (a) postpone the purchase decision, (b) support budgeting and spending limits, or (c) prompt deeper reflection prior to purchase. Results should shed light on the efficacy of various mechanisms for exerting self-control in e-commerce environments.
Humans can recognise the intentions of one another through an understanding of another's mind - psychologists call this ability 'Theory of Mind'. Current computer interfaces, unfortunately, lack this ability and therefore understand little about human motives and intentions, let alone predict them. Recent advances in eye tracking present an opportunity to narrow this gap, as the eyes are a crucial component of theory of mind, enabling humans and intelligent UIs alike to predict intent, since one typically looks before taking action. This dissertation explores the relationship between gaze and intent within the context of games to inform the development of gaze-aware UIs while creating novel gaze-enhanced player experiences along the way.
Engaging citizens is a major goal for governments, scientists, and businesses, as their participation or lack of it can have a big impact on issues of common interest. This is particularly true for civic technologies such as participatory sensing (PS), which relies fully on citizens' active participation for monitoring tasks. Current research is largely focused on providing incentives to increase participation, yet this approach has not proven fully effective. Hence, there is a need to understand people's underlying motivations to join, participate in, and abandon PS. I propose to study the dynamics of motivation from a value-driven perspective. This research will develop novel civic technology design approaches aimed at enhancing engagement, as well as novel approaches to environmental data representation, analysis, and curation. I aim to advance the understanding of what motivates people to engage in PS in environmental monitoring contexts.
My research focuses on designing impactful support interventions for new mothers. I have completed two needs assessment studies to understand the support needs and support network structures of new mothers. Findings from my studies identified a gap in the expected and received support for new mothers. This gap can lead to adverse outcomes for both mother and infant. Prior research shows that compassion motivates people to act upon their empathy for another person's suffering through helpful behaviors to alleviate that suffering. The main objective of the next phase of my research is to design interventions that cultivate compassion towards new mothers in order to narrow the support gap identified. This research can inspire HCI designers to integrate compassion cultivation into support interventions.
Live video streaming is becoming an increasingly popular form of interaction in social media, with mobile devices being used to share remote situations "on the go." We are interested in how such forms of interaction can be applied to mobile telepresence, i.e., a quick and effortless way of "teleporting" users to a remote location, where they can share each other's viewpoints and collaborate. We developed an application that achieves this effect by taking advantage of spatial data inferred from mobile devices' onboard sensors and embedding real-time video streams into panoramic mixed reality displays. To validate our solution, we conducted a preliminary study and observed a statistically significant decrease in cognitive workload, as well as an increase in spatial and situational awareness, in comparison with a regular videoconferencing application. Believing in the novelty of our approach, we plan to extend our application with environment mapping and empathic interface functionality.
My research examines new ways of approaching the contentious and uncertain knowledge politics surrounding natural disasters and climate change. Using three case studies, focused on different locations and types of hazards, I show how current information systems and technologies reproduce and reinforce long-standing discourses that social science research on disaster has widely shown to be problematic. Inspired by research in feminist studies of technoscience, disaster STS, and HCI research into critical and participatory design, I explore novel approaches to making sense of flood hazard, earthquake damage, and sea-level rise that seek to provide new ways of engaging with the complexities and uncertainties that characterize the Anthropocene.
The ubiquity of mobile technologies has led communities to engage with environmental and political concerns through collecting, representing, and analysing data. Activist engagement through data collection aims to hold businesses and the state to account--positioning itself against big data and proprietary technologies that foreclose on agency. Design scholars have unique insight to bring as society addresses the ethical and social issues emerging from data-driven technologies and their development. In my dissertation I focus on the design of civic technologies for environmental accountability--map platforms, mapping techniques, and ambient sensor networks for documenting pollution. By adopting an action research framework to investigate these projects, my research examines how critically-informed, participatory practices intervene to build more sustainable and equitable socio-technical systems, contributing an understanding of how these practices can be applied in new contexts.
In this SIG, we propose a gathering of the HCI and Sustainability SIGCHI community to strategise future initiatives. The gathering is open to all attendees of CHI'18 and aims to foster an inclusive dialogue about the myriad challenges stemming from the intertwining of digital technology design and use with environmental and social justice. As a sustainability-focused SIGCHI community, it is our duty to remain committed to promoting environmental and social justice---a commitment that is particularly timely given the number of digitally mediated global events challenging those entwined notions of justice. In the midst of our rapidly changing and globalised context, this SIG offers space for CHI'18 attendees to strategise about the goals, directions, and initiatives of the HCI and Sustainability community. We aim to develop draft outcomes as a seed for ongoing collaborative discussion and development of the community over the next 2-5 years.
Feminist HCI has made a profound impact on perceptions of women's health, emancipation through design, and gender identity, inclusion, and diversity. However, there is a distinct lack of connection between these disparate but inherently connected research spaces. This SIG meeting aims to bring scholars together to discuss emerging and evolving issues in feminist research and to find ways of using feminist theory and practice as a tool in future HCI research. Ultimately, the SIG will facilitate the engagement of a community of feminist HCI researchers, designers, and practitioners. It brings together those who may feel isolated in their respective research groups or universities to create a platform for feminist thought within SIGCHI and to facilitate collaboration that proactively moves towards the mainstreaming of feminism in HCI.
Digital maps represent an incredible HCI success - they have transformed the way people navigate in and access information about the world. While these platforms contain terabytes of data about road networks and points of interest (POIs), their information about physical accessibility is comparatively poor. Moreover, because of their highly graphical nature and reliance on gesture and mouse input, digital maps can be inaccessible to some user groups (e.g., those with visual or motor impairments). While there is active HCI work towards addressing both concerns, to our knowledge there has been no direct effort to unite this research community. The goal of this SIG is threefold: first, to bring together and network scholars and practitioners who are broadly working in the area of accessible maps; second, to identify grand challenges and open problems; and third, to help better establish accessible maps as a valuable topic with important HCI-related research problems.
This special interest group addresses the status quo of HCI research with regards to research practices of transparency and openness. Specifically, it discusses whether current practices are in line with the standards applied to other fields (e.g., psychology, economics, medicine). It seeks to identify current practices that are more progressive and worth communicating to other disciplines, while evaluating whether practices in other disciplines are likely to apply to HCI research constructively. Potential outcomes include: (1) a review of current HCI research policies, (2) a report on recommended practices, and (3) a replication project of key findings in HCI research.
In this document we explain the need and plans for a SIG Meeting at CHI on telepresence robots. We describe the organization of this SIG, our expected attendees, procedure and schedule of topics to be discussed, as well as our recruitment plan. Our goal is to provide a forum to discuss key issues surrounding the uses and usefulness of telepresence robots, including challenges and best practices.
Chatbots are emerging as an increasingly important area for the HCI community, as they provide a novel means for users to interact with service providers. Due to their conversational character, chatbots are potentially effective tools for engaging with customers and are often developed with commercial interests at the core. However, chatbots also represent opportunities for positive social impact. They can make needed services more accessible, available, and affordable, and they can strengthen users' autonomy, competence, and (possibly counter-intuitively) social relatedness. In this SIG we address the possible social benefits of chatbots and conversational user interfaces. We will bring together the existing, but disparate, community of researchers and practitioners within the CHI community and broader fields who have an interest in chatbots. We aim to discuss the potential for chatbots to move beyond their assumed role as channels for commercial service providers, explore how they may be used for social good, and consider how the HCI community may contribute to realizing this potential.
This SIG will provide child-computer interaction researchers and practitioners an opportunity to discuss challenges brought about by the increasing ubiquity of computing in children's lives, including the collection and use of "big data". Topics include control and ownership of children's data, the impact of personalization on inclusion, the proper role for the quantification of children's lives, and the educational needs of children growing up in a society with ubiquitous computing and big data.
Transparent statistics is a philosophy of statistical reporting whose purpose is scientific advancement rather than persuasion. At our CHI 2017 workshop, "Moving Transparent Statistics Forward", we identified that an important first step is to develop detailed guidelines for authors and reviewers in order to help them practice and promote transparent statistics. We propose a SIG to solicit feedback from the CHI community on a first working draft of "Transparent Statistics Guidelines" and engage potential contributors to push the transparent statistics movement forward.
Sketching is of great value as a process, input, output and tool in HCI, but can be confined to individual ideation or note-taking, as few researchers have the confidence to document events, studies and workshops under the public gaze. The recent surge in interest in this sometimes-overlooked skill has manifested itself in courses, workshops and live-scribing of high-profile events -- and a renewed enthusiasm for freehand sketching as a formal part of the research process at all levels. SketCHI aims to address both research interests and sketching practice in a combined approach to define, discuss and deliver theory and methods to a broad audience. As well as structuring high level discussions and collating information and resources, this SIG will allow attendees to practice and explore observational sketching on location around the conference, with feedback and encouragement from industry professionals. Finally, attendees will be encouraged to collaborate and form a research community around sketching in HCI.
Participants who attend CHI automatically become members of the Special Interest Group on Computer-Human Interaction (SIGCHI). SIGCHI membership entitles members to a variety of benefits, such as student grants, conference discounts, and research magazines. However, how do SIGCHI members experience their membership? What kinds of benefits do members want and need? In a recent survey conducted by the SIGCHI Communications and Membership team, we found that current members seek the opportunity to communicate (present) their research and want to learn from each other. This SIG is a great opportunity to discuss with the SIGCHI Communications and Membership team how to serve members better. It is open to everyone, from CHI 2018 newcomers to long-standing members of SIGCHI.
The Games-and-Play community has thrived at ACM SIGCHI, with a consistent increase in games- and play-related submissions across research papers, workshops, posters, demos, and competitions. The community has attracted a significant number of academic researchers, students, and practitioners to CHI conferences in recent years. CHI 2018 is being held in Montréal, a major game development hub that is home not only to major game studios but also to more than 100 smaller ones. In line with the "Engage With CHI" spirit of CHI 2018, this SIG aims to engage the Games and Play community in a discussion about the directions we can take to reach demographics that would benefit from HCI games research but are currently underrepresented: small, independent developers, non-profit organizations, and academics that create mobile games, games for health, games for change, and/or educational games.
Public participation in the decision-making processes that shape the urban environments we inhabit is an essential aspect of a democratic society. Recent developments in the fields of Information Visualization, Gamification, and Immersive Technologies (AR/VR/MR) offer novel opportunities for civic engagement in the planning process that remain largely unexplored. This SIG aims to identify ways in which these technologies can be used to tackle the public participation challenges identified by the European Commission, the UN Habitat, and the World Bank and experienced by citizens across the world. The overarching goal of this SIG is to define methods and processes whereby technology can facilitate public participation in the planning process for the inclusive and democratic development of our cities. To this end, the SIG will bring together an interdisciplinary group of practitioners, academics, and policy makers from the CHI communities (Design, User Experience, HCI for Development (HCI4D), Sustainability, and Games & Entertainment) and beyond, to discuss innovative ways to increase the transparency, accountability, and democratic legitimacy of this innately political process.
Evaluating research artefacts is an important step in demonstrating the validity of a chosen approach. The CHI community has developed and agreed upon a large variety of evaluation methods for HCI research; however, sometimes those methods are not applicable or not sufficient. This is especially the case when the contribution lies within the context of the application area, such as research in sustainable HCI, HCI for development, or design fiction and futures studies. In this SIG, we invite the CHI community to share their insights from projects that encountered problems in evaluating research and to discuss solutions for this difficult topic. We invite researchers from all areas of HCI who are interested in engaging in a debate about issues in the process of validating research artefacts.
Debates regarding the nature and role of HCI research and practice have intensified in recent years, given the increasingly intertwined relations between humans and technologies. The framework of Human-Engaged Computing (HEC) was proposed and developed over a series of scholarly workshops to complement mainstream HCI models by leveraging synergy between humans and computers through its key notion of "engagement". Previous workshop meetings found "engagement" to be a constructive and extendable notion through which to investigate synergized human-computer relationships, but many aspects of the core concept remain underexplored. This SIG aims to tackle the notion of engagement through discussions of four thematic threads. It will bring together HCI practitioners and researchers from different disciplines, including the Humanities, Design, Positive Psychology, Communication and Media Studies, Neuroscience, Philosophy, and Eastern Studies, to share and discuss relevant knowledge and insights and to identify new research opportunities and future directions.
In both developing and developed countries, policies implemented by governments are affecting the health of already marginalized communities. Within the HCI community there are examples of implicit and explicit forms of health activism, as well as sub-communities adopting an activist approach to address issues of social justice that ultimately influence the social determinants of health. This SIG aims to bring together these groups of HCI scholars to outline an agenda for health activism and research - identifying and highlighting the characteristics of this burgeoning domain.
The organisers of this SIG wish to disrupt CHI's frenetic schedule by offering attendees time and space for collective silence and shared group reflection. Our aim in doing so is to put into action some of the theories and methods already being used in and by the HCI community - e.g., mindfulness [3], reflective design [10], and slow design [8] - and to acknowledge that our well-being is of the utmost importance, including throughout conferences. During this SIG, we will offer attendees two phases of activities: one centred around group silence, and another focused on openly sharing reflections about our experiences at CHI in small groups. Between these activities, attendees will have opportunities to chat with each other. We hope this will foster personal and collective resilience, and inspire creativity.
The contexts of prison and incarceration are under-explored from an HCI and Design perspective, and information about actual everyday life in prison is scarcely available. Whilst some prisons have begun incorporating technology into prison life, this is still in its infancy in terms of prisoner access. This Special Interest Group will provide HCI researchers, Design researchers, and practitioners an opportunity to discuss the potentials and challenges of the prison context. Through participatory exercises we will discuss the particular issues surrounding HCI and Design in prison contexts and for incarcerated individuals. Participants will have opportunities to stay connected after the SIG and to develop collaborations for future research.
Increasing the number of women and other underrepresented groups is a long-standing problem in high tech. Huge investments are being made to grow enrollments in college STEM programs, change hiring practices, and tackle workplace issues. But representation hasn't changed: since 2005, the percentage of women has remained stagnant at 22%. Moreover, 50% of these women leave the field. To address workplace issues and improve retention, companies use interventions like mentorship, bias training, and flexwork. We think there's an opportunity to use technology to create interventions that are more effective than traditional corporate programs. This SIG harnesses our community's creativity and challenges us to think through ways to use technology to support diversity and retention. To provide empirical data for design, the organizers will share findings from a worldwide survey of women and men that reveal the underlying reasons for low retention. Participants will then brainstorm and sketch technology solutions.
This SIG focuses on new definitions of the Natural User Interface (NUI). With the adoption of wearable devices, VR & AR displays, affective computing, and voice user interfaces, we think it is necessary to review our understanding and definition of NUI. This SIG aims to expand discussion and development related to NUI in two areas: first, what experiences should NUIs achieve today, and how can we build UIs that leverage senses beyond vision and hearing, such as tactility, olfaction, and gustation? Second, how can we detect, capture, and compute people's behavioral signals in a natural way and provide output accordingly? What technologies are currently available to achieve NUIs, and what new technologies should be invented?
For this demonstration, Dr. Hill H. Kobayashi will present a collection of HCI interventions designed to monitor radiation levels in the exclusion zone around the Fukushima Daiichi Nuclear Power Plant in Japan. CHI2018 conference attendees will view design artifacts, listen to audio and read posters documenting the latest studies conducted by the Kobayashi Lab at the University of Tokyo. Our goal is to promote dialogue around a relatively new discipline called Human Computer Biosphere Interaction (HCBI), and explore the boundaries of this discipline with the CHI community.
We present two approaches to sensing objects on a surface using passive Radio Frequency Identification (RFID), and explain the application of this technology to two toy prototypes, Code Maker and Story Maker. In Code Maker, we created a system with an RFID antenna array on a robotic fire truck. The array senses RFID-embedded tiles as they are placed on the truck. Each tile corresponds to a command that is executed by the truck, directing it to move, light up, and make sounds. In Story Maker, RFID antennas are embedded in toys that are placed on a mat underlain with a two-dimensional array of RFID tags. The toys report their position on the mat, which triggers a scene in the story that corresponds to the configuration of toys. We describe the advantages of these RFID implementations as an alternative to other approaches, such as resistive sensing and hardware connectors.
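As an illustration of the Code Maker idea, the following hypothetical sketch (not the authors' code) maps scanned tile UIDs to truck commands; the UIDs and command names are made up.

```python
# Hypothetical tile-to-command dispatch: tiles scanned by the RFID antenna
# array are executed in the order they were placed on the truck.
TILE_COMMANDS = {          # tag UID -> command name (made-up values)
    "04:A1:B2": "forward",
    "04:C3:D4": "light_on",
    "04:E5:F6": "honk",
}

def execute(command):
    print(f"truck: {command}")   # stand-in for motor/LED/speaker control

def run_program(scanned_uids):
    for uid in scanned_uids:
        cmd = TILE_COMMANDS.get(uid)
        if cmd:                  # ignore unknown tags
            execute(cmd)

run_program(["04:A1:B2", "04:E5:F6"])  # tiles placed: move, then honk
```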
We demonstrate Grafter, a software system that allows users to remix 3D-printable machines based on models made by others. Users can extract mechanisms from existing models and connect them to one another, forming new machines. This works because Grafter does not attempt to recombine parts, as is common in 3D-printed remixes, but recombines mechanisms instead. Since the original mechanisms have been printed and shown to work before, Grafter keeps them intact and thereby sidesteps the need for time-consuming tweaking and test printing.
This demo presents the concept of Interactive Digital Signage with Haptics, where users can interact with public digital screens with their bare hands, utilizing tracking technology and ultrasonic mid-air haptic feedback. Using three main components - a digital screen, a tracking device, and Ultrahaptics technology for tactile feedback in mid-air - users are offered a multi-sensory experience that could dramatically improve the advertising experience, increasing brand engagement, dwell time, and brand recall. To this end, we present an example of a movie poster that transforms into an interactive mini game.
We present a mobile phone-based application for loan management in a resource-constrained setting, in which a social enterprise manages auto-rickshaw loans for drivers and takes charge of collections. The design was informed by an ethnographic study, which revealed how loan management for this financially vulnerable population is a daily struggle, and how loan payment is a collaborative achievement between collectors and drivers. However, drivers and collectors have limited resources to hand for loan management. To address this, we designed Prayana, an intermediated financial management app.
We demonstrate Printed Paper Actuator, a low-cost, reversible, electrical actuation and sensing method. It is a novel but easily accessible enabling technology that expands the library of actuation-sensing materials in HCI. By integrating three physical phenomena - bilayer bending actuation, the shape memory effect of the thermoplastic, and current-driven Joule heating via conductive printing filament - we developed the actuator by simply printing a single layer of conductive Polylactide (PLA) on a piece of copy paper with a desktop fused deposition modeling (FDM) 3D printer.
We present Haptic Links, electro-mechanically actuated physical connections capable of rendering variable stiffness between two commodity handheld virtual reality (VR) controllers. When attached, Haptic Links can dynamically alter the forces perceived between the user's hands to support the haptic rendering of a variety of two-handed objects and interactions. They can rigidly lock controllers in an arbitrary configuration, constrain specific degrees of freedom or directions of motion, and dynamically set stiffness along a continuous range. We demonstrate and compare three prototype Haptic Links: Chain, Layer-Hinge, and Ratchet-Hinge. We then describe interaction techniques and scenarios leveraging the capabilities of each. Our user evaluation results confirm that users can perceive many two-handed objects or interactions as more realistic with Haptic Links than with typical unlinked VR controllers. For details, see the full paper in the main conference proceedings.
In this paper, we introduce a transcutaneous language communication (TLC) system that transmits a tactile representation of spoken or written language to the arm. Users receive messages without looking at their smart devices and feel them through their skin. We will demonstrate an application that helps users get acquainted with our TLC system, learn the building blocks of a small vocabulary, and generalize them to new words - all within 3-5 minutes at the demo. Finally, applications of the TLC system for individuals with and without sensory impairments are discussed.
Mathematical experiences are intrinsic to our everyday lives, yet mathematics education is mostly confined to textbooks. Seymour Papert used the term 'Mathland' to propose a world where one would learn mathematics as naturally as one learns French while growing up in France. We demonstrate a Mixed Reality application that augments the physical world with interactive mathematical concepts to enable constructionist mathematical learning in the real world. Using Mathland, people can collaboratively explore, experience and experiment with mathematical phenomena in playful, applied and exploratory ways. We implemented Mathland using the Microsoft Hololens and two custom controllers to afford complete immersion through tangible interactions, embodiment and situated learning.
We present Haptic Revolver, a handheld virtual reality controller that renders fingertip haptics when interacting with virtual surfaces. Haptic Revolver's core haptic element is an actuated wheel that raises and lowers underneath the finger to render contact with a virtual surface. As the user's finger moves along the surface of an object, the controller spins the wheel to render shear forces and motion under the fingertip. The wheel is interchangeable and can contain physical textures, shapes, edges, or active elements to provide different sensations to the user. Because the controller is spatially tracked, these physical features can be spatially registered with the geometry of the virtual environment and rendered on-demand.
We present Thor's Hammer - an ungrounded force feedback device. Thor's Hammer consists of a cube-shaped structure and a handle. To generate 3-DOF force feedback, six motors and propellers installed on the cube project air in six directions that are normal to each face of the cube-shaped structure. Tracked in 6-DOF by an optical tracking system, Thor's Hammer can also create strong and continuous force feedback in any direction regardless of the device's orientation. We present four virtual reality applications to demonstrate beneficial uses of the force feedback provided by Thor's Hammer.
TaskCams are simple digital cameras designed for studies of users and their contexts. Researchers and practitioners can build their own TaskCams using instructions and videos from www.probetools.net, off-the-shelf parts, and a custom Arduino shield made available from the site. There is a myriad of options for customisation and modification, allowing researchers to adopt and adapt them to their needs. We view the open-source distribution of TaskCams as a novel approach to disseminating a research methodology.
Dimensionality reduction is a common method for analyzing and visualizing high-dimensional data. However, reasoning dynamically about the results of a dimensionality reduction is difficult. Dimensionality-reduction algorithms use complex optimizations to reduce the number of dimensions of a dataset, but these new dimensions often lack a clear relation to the initial data dimensions, thus making them difficult to interpret. Here we propose a visual interaction framework to improve dimensionality-reduction based exploratory data analysis. We introduce two interaction techniques, forward projection and backward projection, for dynamically reasoning about dimensionally reduced data. We also contribute two visualization techniques, prolines and feasibility maps, to facilitate the effective use of the proposed interactions. We apply our framework to PCA and autoencoder-based dimensionality reductions. Through data-exploration examples, we demonstrate how our visual interactions can improve the use of dimensionality reduction in exploratory data analysis.
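To make forward and backward projection concrete, here is a minimal illustrative sketch using PCA in scikit-learn (not the authors' implementation); the data are synthetic.

```python
# Forward projection: perturb a feature and see where the point moves in
# the embedding. Backward projection: move a point in the embedding and
# see what feature values that location implies.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # stand-in high-dimensional data
pca = PCA(n_components=2).fit(X)
Y = pca.transform(X)                   # 2D embedding

# Forward projection: "what if feature 3 of this point were higher?"
x = X[0].copy()
x[3] += 1.0
print("moves to:", pca.transform(x[None, :])[0], "from:", Y[0])

# Backward projection: map a shifted embedding location back to features.
y_target = Y[0] + np.array([0.5, 0.0])
print("implied features:", pca.inverse_transform(y_target[None, :])[0])
```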
In this paper, we propose a new system called xSlate, a stiffness-controlled surface for shape-changing interfaces. It is enabled by a deformable frame structure consisting of linear actuators and an elastic skin surface that can configure its stiffness through pneumatic jamming. We describe the implementation of xSlate and how it can be used for future applications.
This demonstration presents Season Traveller, a multisensory virtual reality (VR) narration of a journey through four seasons within a mystical realm. By adding olfactory and haptic (thermal and wind) stimuli, we extend traditional audio-visual VR technologies to achieve enhanced sensory engagement within interactive experiences. Using subjective measures of presence, we evaluated the impact of the different modalities on the virtual experience. Our results indicate that 1) the addition of any single modality improves the sense of presence relative to traditional audio-visual experiences, and 2) combining these modalities produces a further significant enhancement.
We present Snow Dome, a Mixed Reality (MR) remote collaboration application that supports multi-scale interaction for a Virtual Reality (VR) user. We share a local Augmented Reality (AR) user's reconstructed space with a remote VR user, who can scale themselves up into a giant or down into a miniature to gain different perspectives and to interact at that scale within the shared space.
Physical user interfaces (PHUIs) are built from tangible widgets arranged on the surfaces of physical objects. PHUI-kit is a tool for making PHUI design more like graphical user interface (GUI) design in the context of 3D printed object housings. The tool includes a drag-and-drop interface to place, reposition, and delete physical widgets on a curved housing in a 3D modeling tool. Automatic cable routing simplifies assembly planning and execution. The objects are printed, and the widgets are snapped onto a cable and then into the housing to create an interactive object. A cable guide printed on paper assists the maker while snapping widgets onto the cable.
In this work, we present a drawing workstation for blind people using a two-dimensional tactile pin-matrix display for input and output. The dynamic tactile display enables visually impaired users to directly access an evolving image drawn by a sighted person. In addition, different input modalities, such as palettes of standard shapes accessible by menu or gesture interaction as well as freehand drawing with a digitizer stylus, enable a blind user to create a drawing. With this system, visually impaired and sighted persons can exchange graphical information.
We present GridDrones, a self-levitating programmable matter platform that can be used for representing 2.5D voxel grid relief maps, with capabilities for rendering overhangs and 3D spatial transformations. GridDrones consists of 15 cube-shaped nanocopters that can be placed in a volumetric 1xnxn mid-air grid. Grid deformations can be applied interactively to this voxel lattice by first selecting a set of voxels using a 3D wand, assigning a continuous topological relationship between voxel sets that determines how voxels move in relation to each other, and then drawing the selected voxels out of the lattice structure. Using this simple technique, it is possible to create overhanging structures that can be translated and oriented freely in 3D. Shape transformations can also be recorded to allow for simple physical shape-morphing animations. This work extends previous work on selection and editing techniques for 3D user interfaces.
Exergaming has become a successful and established trend in the gaming, health, and fitness sectors. However, current commercial and research-based solutions often lack a user-centered, symbiotic design approach covering both the physical and virtual design levels. Consequently, players cannot experience a maximally attractive and effective individual and social user experience. To contribute to this topic, we present the adaptive exergame "Plunder Planet" for children and young adolescents, which can be played in single- and cooperative multiplayer modes with two different motion-based controllers demanding either haptic or gesture-based input movements. The fitness game environment was designed by an interdisciplinary team of sport scientists, game user researchers, game/industrial/interaction designers, and children. So far, "Plunder Planet" has been evaluated in various study settings, which demonstrate the benefits of the holistic and interdisciplinary design approach.
In this paper, we describe three interactive prototypes of connected objects conceived as resources. Our aim with these prototypes is to demonstrate a new approach to designing connected technologies for older people. This approach moves away from the stereotype of older people as frail and passive and addresses them as resourceful individuals. After presenting the design process and the prototypes, we discuss how to shift from the design of scripted products to the design of resources. This entails conceiving tools that can be used in various ways to support older people's existing competences of resourcefulness in activities they value.
This paper describes an interactive demo of our collaborative research activity with the Museum of Broken Relationships, one of Lonely Planet's 'Fifty Museums to Blow Your Mind'. In collaboration with the Museum, we are currently collecting data worldwide and cross-culturally on the digital possessions individuals associate with their romantic breakup, combined with the stories behind those possessions. Taking a methodologically innovative approach, we adapt the Museum's existing practices to conduct research (triangulating existing small-scale interview data) whilst simultaneously generating a new collection for the Museum. In doing so, we bring contemporary HCI questions of ownership, curation, and presentation of self after a romantic breakup before the public. The demo will exhibit the digital possessions and associated stories that we collect, whilst also giving the CHI community the opportunity to contribute to the collection in real-time at the conference, by sharing digital possessions and stories of their own romantic breakups.
CLAW is a handheld virtual reality controller that augments typical controller functionality with force feedback and actuated movement of the index finger. Our controller enables three distinct interactions (grasping virtual objects, touching virtual surfaces, and triggering) and changes its corresponding haptic rendering by sensing differences in the user's grasp. A servo motor coupled with a force sensor renders controllable forces to the index finger during grasping and touching. Using position tracking, a voice coil actuator at the index fingertip generates vibrations for various textures synchronized with finger movement. CLAW also supports haptic force feedback in trigger mode, when the user holds a gun.
Beyond current dyadic chatbots and voice-based personal assistants, this demo showcases a novel experience in which users interact with multiple text-based conversational systems as if seated around a table with them. Two key technologies allow seamless interaction between users and chatbots: (1) a state-of-the-art conversational governance system that allows a natural flow of conversation without the use of vocatives to trigger the chatbot responses; and (2) a conversation visualization mechanism in which the utterances of participants are aesthetically projected onto a tabletop. These technologies were developed originally for "Café com os Santiagos", an artwork where visitors conversed with three chatbots portraying characters from a book in a scenographic space recreating a 19th-century coffee table. The demo lets users sit down, actually have coffee, and chat with the multiple chatbot characters.
The Breathing Room is an interactive immersive environment that uses XeThru impulse radar technology by Novelda to create a dynamic interaction between space (environment) and the human respiratory system (body). The Breathing Room enables participants to synchronize their natural breathing pattern with the manipulation of the space. The architecture of the space consists of 'living hinge' panels whose unique CNC-cut patterns enable the wooden panels to morph in three dimensions. The installation uses multiple features of the radar technology (speed, accuracy, material penetrability) to create a bio-friendly environment that enhances relaxation and contemplation.
We present a system that raises awareness about users' inner state. Dišimo is a multimodal ambient display that provides feedback about one's stress level, which is assessed through heart rate monitoring. Upon detecting a low heart rate variability for a prolonged period of time, Dišimo plays an audio track, setting the pace of a regular and deep breathing. Users can then choose to take a moment to focus on their breath. By doing so, they will activate the Dišimo devices belonging to their close ones, who can then join for a shared relaxation session.
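As a sketch of the sensing logic, stress can be flagged when heart rate variability (here RMSSD over RR intervals) stays below a threshold for several consecutive windows; the threshold and window count below are illustrative assumptions, not Dišimo's calibrated values.

```python
import numpy as np

def rmssd(rr_ms):
    """Heart rate variability as RMSSD over a window of RR intervals (ms)."""
    diffs = np.diff(np.asarray(rr_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

def should_pace_breathing(rr_windows, threshold_ms=20.0, sustain=5):
    """Trigger the audio pacer once HRV has stayed low for `sustain`
    consecutive windows, i.e. a prolonged period of low variability."""
    low = [rmssd(w) < threshold_ms for w in rr_windows]
    return len(low) >= sustain and all(low[-sustain:])
```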
We believe that wearables and movement are a perfect fit for enhancing the tabletop role-playing game (TTRPG) experience: they can provide embodied interaction, be perceived as character costumes, enhance ludic properties, and increase connectedness to imaginary game worlds. Through these improvements, they can increase the immersiveness and the player/character relationship that are critical to an ideal TTRPG experience. To investigate this underexplored area, we conducted an extensive research-through-design process that included (1) a participatory design workshop with 25 participants, (2) preliminary user tests with Wizard-of-Oz and experience prototypes with 15 participants, (3) production of a new game system with wearable and tangible artifacts, and (4) summative user tests with 16 participants to understand the effects on experience. As a result of our study, we derived design guidelines on how to integrate wearables and movement into narrative-based tabletop games and communicate how the results of each phase affected our artifacts.
While many still consider interactive movies an unrealistic idea, current delivery platforms like Netflix, commercial VR, and the proliferation of wearable sensors mean that adaptive and responsive entertainment experiences are an immediate reality. Our prior work demonstrated a brain-responsive movie that showed different views of scenes depending on levels of attention and meditation produced by a commercial home-entertainment brain sensor. Based on lessons learned, this demonstration exhibits the interactions designed for our new brain-controlled movie, The MOMENT, being released in 2018.
Gaming with large displays can promote whole body interaction, social gameplay among acquaintances and strangers alike, and foster a sense of immersion. We present SpaceHopper, a large-scale, floor-projected version of the game Asteroids, where players bounce on a hopper ball to control their ship. We implemented two interaction modes: bounce to repel, which produces a 360º repulsion wave, and bounce to shoot, which fires a directional laser. Players bounce higher to generate a stronger wave, and bounce faster to fire more lasers. Perspective compensation is applied to the floor display, enabling the player to see the entire game field clearly regardless of their location on the floor.
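A minimal sketch of the two described mappings, bounce height to wave strength and bounce rate to laser rate; the ranges and window length are illustrative assumptions:

```python
def wave_strength(bounce_height_m, h_min=0.05, h_max=0.5):
    """Higher bounces produce a stronger 360-degree repulsion wave (0..1)."""
    t = (bounce_height_m - h_min) / (h_max - h_min)
    return max(0.0, min(1.0, t))

def lasers_per_second(bounce_times, window_s=3.0):
    """Faster bouncing fires more lasers: bounce rate over a sliding window."""
    if not bounce_times:
        return 0.0
    recent = [t for t in bounce_times if t >= bounce_times[-1] - window_s]
    return len(recent) / window_s
```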
We propose ElasticVR, a light-weight wearable device that provides multiple levels of force feedback in virtual reality (VR) for more immersive and realistic VR experiences. Force feedback is generally categorized as passive or active. Passive force is produced passively and continuously against the movement of a body part, e.g., elasticity and resistance. Active force is triggered by events and rendered actively and discretely to stimulate users, e.g., recoil and impact. ElasticVR consists of an elastic band, servo motors, and mechanical locks to provide both passive and active force feedback. By changing the length and extension distance of the elastic band, different elasticity levels produce various levels of force feedback. Based on a study of multi-level force feedback perception, we provide five levels of passive and three levels of active force feedback on the finger. We expect that realistic force feedback from ElasticVR enhances VR experiences.
4D experiences are among the most immersive experiences that current virtual reality technologies can offer. However, the production of 4D content is still very labor-intensive, and this has been a major obstacle to the wider spread of 4D platforms. In this demonstration, we present two methods that generate motion and vibrotactile effects automatically from the audiovisual content of a movie. Our synthesis methods provide compelling 4D experiences to viewers while greatly improving productivity.
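The abstract does not detail the synthesis methods, but a common baseline for audio-driven vibrotactile effects (an assumption here, not necessarily the authors' algorithm) is to band-pass the low frequencies of the soundtrack and use a framewise RMS envelope as the actuator intensity:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def vibro_envelope(audio, sr, band=(20, 120), frame=0.02):
    """Derive a vibrotactile intensity track from a movie's audio: band-pass
    the low frequencies, then take a framewise RMS envelope."""
    sos = butter(4, band, btype="bandpass", fs=sr, output="sos")
    low = sosfilt(sos, audio)
    n = int(sr * frame)
    frames = low[: len(low) // n * n].reshape(-1, n)
    env = np.sqrt(np.mean(frames ** 2, axis=1))
    return env / (env.max() + 1e-9)   # normalised 0..1 actuator commands
```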
In the demonstrator associated with this paper, we present the 'data-enabled design canvas', which we used to investigate and explore the value of parent-tracked data in parent-healthcare professional interactions. The 'data-enabled design canvas' is a collection of physical and digital prototypes, open to new design explorations, that aids in utilizing data as a creative material for design. Through this demonstrator we highlight the physical and explorative qualities of our approach and give insight into how our research-through-design approach has contributed to our insights.
CTRL-Labs has developed algorithms for determination of hand movements and forces and real-time control from neuromuscular signals. This technology enables users to create their own control schemes at run-time -- dynamically mapping neuromuscular activity to continuous (real-valued) and discrete (categorical/integer-valued) machine-input signals. To demonstrate the potential of this approach to enable novel interactions, we have built three example applications. One displays an ongoing visualization of the current posture/rotation of the hand and each finger as determined from neuromuscular signals. The other two showcase dynamic mapping of neuromuscular signals to continuous and discrete input controls for a two-player competitive target acquisition game and a single-player space shooter game.
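A minimal sketch of such a run-time mapping, with a smoothed rectified envelope as the continuous control and a hysteretic threshold crossing as the discrete event; the filter, constants, and event names are illustrative assumptions, not CTRL-Labs' decoding algorithms.

```python
class DynamicMapping:
    """Map a streamed neuromuscular sample to one continuous (real-valued)
    and one discrete (categorical) machine-input signal."""

    def __init__(self, alpha=0.05, threshold=0.6):
        self.envelope = 0.0
        self.alpha = alpha          # smoothing factor for the envelope
        self.threshold = threshold  # activation level for discrete events
        self.active = False

    def step(self, sample):
        # Rectify and low-pass the raw sample into a slowly varying envelope.
        self.envelope += self.alpha * (abs(sample) - self.envelope)
        continuous = min(1.0, self.envelope)      # e.g. thrust in the shooter
        discrete = None
        if continuous > self.threshold and not self.active:
            self.active, discrete = True, "fire"  # e.g. acquire a target
        elif continuous < self.threshold * 0.8:
            self.active = False                   # hysteresis on release
        return continuous, discrete
```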
In this research, we propose a fingertip-sized tactile device named HapCube, consisting of three orthogonal voice coil actuators. It provides tangential and normal pseudo-force feedback on a fingertip. The tangential feedback can be created in any desired tangential direction by combining two orthogonal asymmetric vibrations. The normal feedback can simulate various button sensations, such as the four representative key switches (Black, Red, Blue, and Brown) from the CHERRY keyboard company.
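A sketch of how a directional tangential pseudo-force can be steered by splitting one asymmetric waveform across the two orthogonal tangential actuators; the waveform shape and parameters are illustrative assumptions, not HapCube's measured drive signals.

```python
import numpy as np

def asymmetric_wave(t, freq=40.0, skew=0.8):
    """Asymmetric waveform: a slow stroke one way and a fast return stroke,
    which the skin integrates as a net directional pull (pseudo-force)."""
    phase = (t * freq) % 1.0
    up = phase / skew
    down = (1.0 - phase) / (1.0 - skew)
    return np.where(phase < skew, up, down) * 2.0 - 1.0

def tangential_drive(t, direction_rad):
    """Weight one waveform across the two orthogonal tangential actuators
    to point the pseudo-force in any direction on the finger pad."""
    w = asymmetric_wave(t)
    return np.cos(direction_rad) * w, np.sin(direction_rad) * w

t = np.linspace(0.0, 0.1, 1000)
x_cmd, y_cmd = tangential_drive(t, np.deg2rad(135))  # pull toward upper-left
```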
A watch-back tactile display is a tempting output option for a smartwatch. However, a single-modal tactile display has limited channel capacity owing to the small contact area and the limited spatial resolution of the skin. To expand the information bandwidth, we considered combining multiple tactile stimuli that are perceptually distinct. As one form of multimodal tactile display, we demonstrate a wind-vibration tactile display. The display consists of two layers of a 2 × 2 (4-point) tactile display. By stacking a vibration motor on top of a fan, each point of the display can independently present vibration and a light wind on the same area of the wrist. Based on the relationship between the two tactile modalities, we categorized possible interaction scenarios into three pattern types: Individual, Simultaneous, and Sequential. We demonstrate these interaction scenarios to show the feasibility of a multimodal tactile display.
Using technology to convey information and feelings between people is a key goal of many interactive systems, typically with the highest connection fidelity possible. However, the choices made during design and implementation inevitably impact how the communication is perceived. As part of the Empathy Mirror project [4], we explore using technology to instead invert the expressed physical aggression of one participant into a soothing massage for another. Participants take out their aggression on a punching bag. The system detects the magnitude of the blows, and processes them into vibrations rendered via a massage seat to a second participant. Participants reflect on how technology can subvert our intentions, such that the receiver's perception may be very different from what the sender originally communicated. In this case, the most aggressive action is to remove themselves from the exhibit, leaving the receiver with no positive vibes, effectively nullifying the ability to be hostile.
Longitudinal and large-scale studies have shown a strong link between spatial abilities and success in STEM learning and careers. However, many spatial learning materials are either still 2D-based or not optimized to engage students. In addition, research in spatial cognition distinguishes many spatial abilities as independent from one another. This implies the need to design content and interactions that target individual spatial abilities in order to further research and support training. Following this approach, we describe the design and implementation of a VR system, "Keep the Ball Rolling", controlled with tangible inputs and played as a multi-level game. The system is designed around one central spatial ability: penetrative thinking, which is important in geology, biology, human anatomy, and dentistry, among other areas.
We present a system that complements virtual reality experiences with passive props, yet still allows modifying the virtual world at runtime. The main contribution of our system is that it does not require any actuators; instead, our system employs the user to reconfigure and actuate otherwise passive props. We demonstrate a foldable prop that users reconfigure to represent a suitcase, a fuse cabinet, a railing, and a seat. A second prop, suspended from a long pendulum, not only stands in for inanimate objects, but also for objects that move and demonstrate proactive behavior, such as a group of flying droids that physically attack the user. Our approach conveys a sense of a living, animate world, when in reality the user is the only animate entity present in the system, complemented with only one or two physical props. In our study, participants rated their experience as more enjoyable and realistic than a corresponding no-haptics condition.
This demonstration showcases an open-source hardware and software platform developed to lower the barrier to entry for those wishing to explore more sophisticated haptic interaction techniques. The Haply Development Toolkit allows developers to use the same hardware and software architecture to design force-feedback and vibratory haptic interaction methods from one to four degrees of freedom.
Learning from Demonstration (LfD) is a paradigm in which humans demonstrate how to perform complex tasks, and these demonstrations are used to train autonomous agents. However, the performance of LfD is highly sensitive to the quality of the demonstrations, which in turn depends on the user interface. In this paper, we propose the use of Virtual Reality (VR) to develop an intuitive interface that enables users to provide good demonstrations. We apply this approach to the task of training a visual attention system, a crucial component for tasks such as autonomous driving and human-robot interaction. We show that an interaction time of a few minutes is sufficient to train a deep neural network to successfully learn attention strategies.
The Firefly is a wearable designed to encourage the physical and emotional connection of players of a larp (live action role-playing game). It is worn as a bracelet whose color changes depending on the wearer's connection to another Firefly. This wearable design reflects the positive personal and emotional impact that casual and deeper social interactions may have on people. Preliminary observations of the Firefly within the context of a larp show its potential to support collaboration and close interactions for collocated wearers. The Fireflies can inspire others designing social wearables, i.e., wearable technology that supports and enhances social experiences, within and outside the realms of play and games.
With this work, we demonstrate our concept of Reality-Based Information Retrieval. Our principal idea is to bring Information Retrieval closer to the real world, for a new class of future, immersive IR interfaces. Technological advances in computer vision and machine learning will allow mobile Information Retrieval to make even better use of people's surroundings and their ability to interact with the physical world. Reality-Based Information Retrieval augments the classic Information Retrieval process with context-dependent search cues and situated query and result visualizations using Augmented Reality technologies. We briefly describe our concept as an extension of the Information Retrieval pipeline and present two prototype implementations that showcase the potential of Reality-Based Information Retrieval.
This demonstration presents BioFidget, a biofeedback system that integrates physiological sensing and display into a smart fidget spinner for respiration training. We present a simple yet novel hardware design that transforms a fidget spinner into 1) a nonintrusive heart rate variability (HRV) sensor, 2) an electromechanical respiration sensor, and 3) an information display. The combination of these features enables users to engage in respiration training through designed tangible and embodied interactions, without requiring them to wear additional physiological sensors. The results of an empirical study with 32 participants in a practical setting show that the respiration training method reduces stress and that the proposed system meets the requirements of sensing validity and engagement.
Paper-based fabrication techniques offer powerful opportunities to prototype new technological interfaces. Typically, paper-based interfaces are either static mockups or require integration with sensors to provide real-time interactivity. The latter can be challenging and expensive, requiring knowledge of electronics, programming, and sensing. But what if computer vision could be combined with domain-aware prototyping and programming tools to support the rapid construction of interactive, paper-based tangible interfaces? We designed a toolkit called ARcadia that allows for rapid, low-cost prototyping of tangible user interfaces (TUIs) and requires only access to a webcam, a web browser, and paper. ARcadia brings paper prototypes to life through the use of marker-based augmented reality (AR). Users create mappings between real-world tangible objects and different UI elements. After a crafting and programming phase, all subsequent interactions take place with the tangible objects. We evaluated ARcadia in a workshop with 120 teenage girls and found that tangible AR technologies can empower novice technology designers to rapidly construct and iterate on their ideas.
An exploration into the consumption of data led to data visualization using edible materials, creating data sculptures that are literally consumable. The data, reflecting a quick summary of a professional profile, is provided by an individual to custom software. Based on this data, the software generates a visualization of the profile, finally rendered as a Jalebi, a popular Indian sweet created by a process resembling food printing.
Procedural art, or art made with programming, suggests opportunities to extend traditional arts; however, this potential is limited by programming tools that conflict with manual practices. We hypothesize that by developing programming environments that align with how manual artists work, we can build procedural systems that enhance, rather than displace, manual art. We present Dynamic Brushes, a programming environment centered around manual drawing. Dynamic Brushes enables the creation of ad-hoc drawing tools that transform stylus inputs to procedural patterns. Applications range from transforming individual strokes to behaviors that draw multiple strokes simultaneously, respond to temporal events, and leverage external data.
Technology adoption factors such as the protection of autonomy, perceived familiarity, and avoiding anxiety are key to the adoption of mobile technologies by older adults [9]. These factors must be addressed in the design of interactions geared towards this population, else products in this space will fail to be adopted by their intended audience.
This paper presents TAGhelper, an interactive tactile aid for tablets that addresses the challenge of mobile technology adoption by older adults. TAGhelper seeks to assist older adult users in effectively adopting mobile technologies by providing them with contextually pertinent information about their current position in the interface. The interactive tactile aid for tablets proposed in this paper brings together the existing learning preferences of older adults with a tangible and tactile interaction method. By delivering relevant information at the time of need, TAGhelper aims to provide older adults with the means to familiarize themselves with their immediate environment when they become confused by an interface and its functions.
We demonstrate Qetch [1], a time series querying tool, where users freely sketch patterns of interest, without specifying query length or amplitude, on a scale-free canvas. The design of Qetch's interface and matching algorithm were motivated by a crowd study of how humans sketch time series patterns and what sketching errors they tend to make. The demonstration will walk attendees through Qetch's guiding design principles, interface features, and unique matching algorithm. We will also summarize results on Qetch's effectiveness from our user studies.
Idropo is a hydroponic system that educates children through gardening and play. For children, growing a plant can be challenging, and they lose interest easily. With Idropo, the cultivation time is reduced by the hydroponic system, and the educational experience is enhanced by interacting with a dynamic and friendly gardening companion.
Recent research has shown how to change the color of existing objects using photochromic materials. These materials can switch their appearance from transparent to colored when exposed to light of a certain wavelength. The color remains active even when the object is removed from the light source. The process is fully reversible allowing users to recolor the object as many times as they want. So far, these systems were limited to single color changes (i.e. transparent to colored). We present ColorMod, a method to accomplish multi-color changes (e.g., red-to-yellow). We achieve this using a multi-color pattern with one color per voxel across the surface of the object. When recoloring the object, our system locally activates only those voxels that have the desired color and turns all other voxels off.
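The per-voxel recoloring decision reduces to a simple rule: activate the voxels whose ink matches the target color and deactivate the rest. A minimal sketch (the data layout and names are illustrative):

```python
def recolor_plan(surface_voxels, target_color):
    """Decide, per surface voxel, whether its photochromic ink should be
    activated (colored) or deactivated (transparent) to show target_color."""
    return [(position, "activate" if ink_color == target_color else "deactivate")
            for position, ink_color in surface_voxels]

voxels = [((0, 0), "red"), ((0, 1), "yellow"), ((1, 0), "red")]
print(recolor_plan(voxels, "yellow"))
# Only the yellow voxel is switched on; the red voxels are turned off.
```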
Facial thermoreception plays an important role in mediating the surrounding ambience and the direct feeling of temperature and touch, enhancing our subjective experience of presence. This sensory modality has not been well explored in the fields of Telepresence and Telexistence. We present Ambient, an enhanced experience of remote presence that combines a full-face thermal feedback system with the first-person view of Telexistence systems. Here, we present an overview of the design and implementation of Ambient, along with envisioned scenarios for social and intimate applications.
Gait analysis of walking, running, and exercising helps prevent injury and improve performance in sports activities. However, current mainstream gait detection and locomotion measurement techniques rely on motion capture and need bulky devices such as pressure plates and motion capture cameras. These are installed in a laboratory environment and must be operated by professional technicians; in other words, traditional lab-based systems cannot measure outdoor activities.
SaFePlay, a smart footwear platform, is a portable system for lower-limb motion monitoring, biomechanics measurement, and analysis. The system consists of smart insoles, smart knee braces, and an analysis engine installed on a mobile device. Users can review data over a specific period of time to gain a general understanding of their gait-related health conditions. The historical data can also be provided to professional coaches or physical therapists to help them decide on further action for the user.
SurfaceConstellations is a modular hardware platform that allows users to easily create their own novel cross-device environments by assembling multiple mobile surfaces with 3D printed link modules. Our platform combines the advantages of multi-monitor workspaces and multi-surface environments with the flexibility and extensibility of more recent cross-device setups. The platform includes a comprehensive library of 3D-printed link modules to connect and arrange tablets into new workspaces, several strategies for creating new setups, and a web-based visual configuration tool for creating new setups and automatically generating link modules. We will demonstrate different use-case applications across the design space of reconfigurable cross-device workspaces and the configuration tool.
We demonstrate Project Zanzibar, a flexible mat that locates, uniquely identifies, and communicates with tangible objects placed on its surface, as well as sensing a user's touch and hover gestures. Zanzibar integrates efficient and localised Near Field Communication (NFC) over a large surface area with object tracking that combines NFC signal strength and capacitive footprint detection. In a series of applications, we showcase Zanzibar's capabilities: interacting with tangibles of varying complexity and interactivity, sensing orientation on the mat, harvesting power, providing additional input and output, stacking, and extending sensing outside the bounds of the mat.
We present RealWalk, a pair of haptic shoes for HMD-based virtual reality, designed to create realistic sensations of ground surface deformation through MR fluid (Magnetorheological fluid) actuators. RealWalk offers a novel interaction scheme through physical interaction between the shoes and the ground surfaces while walking in a virtual reality space. Our unique approach of using the property of MR fluid creates a variety of ground material deformation such as snow, mud, and dry sand by changing its viscosity when the actuators are pressed by the user's foot. We build an interactive virtual reality application with four different virtual scenes to explore the design space of RealWalk.
Communication platforms have struggled to provide effective tools for people facing harassment online. Rather than relying on platforms, we consider how harassment recipients can harness their personal community for support. We present Squadbox, a tool to help recipients of email harassment coordinate a "squad" of friend moderators to shield and support them during attacks. Moderators intercept email from strangers and can reject, organize, and redirect emails as well as collaborate on filters. Harassment recipients can highly customize the tool, choosing what messages go through, how moderators should handle particular messages, and if and how they receive rejected messages.
In this demonstration we introduce MuseBeat, a real-time music generator that creates music synchronized with the user's heart rate. MuseBeat uses music to create an emotional awareness of the body, beyond conventional pulse tracking. The mobile application, combined with a fitness tracker, reads the user's heart rate and generates music in sync with it. Our research investigates people's reactions to the generated music during their daily activities. Initial user feedback shows the new awareness of the heart that MuseBeat creates and its potential to help people concentrate while reading, calm down at the beginning of meditation, and fall asleep.
We present metamaterial textures: 3D printed surface geometries that can perform a controlled transition between two or more textures. Metamaterial textures are integrated into 3D printed objects and allow designing how the object interacts with the environment and the user's tactile sense. Metamaterial textures offer full control over the transformation, such as intermediate states and the sequence of actuation. This allows for integrating multiple textures that enable functional objects, such as a transformable door handle that integrates tactile feedback for visually impaired users, or a shoe sole that transforms from flat to treaded to adapt to weather conditions. In our hands-on demonstration, we show our 3D printed prototypes and several samples. Attendees can touch the objects and explore their different textured states.
This extended abstract describes the technology-enabled social justice intervention of the National Youth Art Movement Against Gun Violence (NYAM) project. NYAM launched its first intervention in Chicago in 2017 in response to escalating rates of gun violence. NYAM empowers youth to use their intrinsic motivations to create artwork that unpacks the deeply layered ways in which violence uniquely affects each of their lives in Chicago. Guided by the composite framework of Transformative Activist Stance (TAS) and a social justice orientation to interaction design, NYAM placed these artworks on billboards in public spaces and enabled them with GPS and Augmented Reality technologies to create unexpected experiences that encouraged participation in gun violence prevention. The current phase of the project was co-developed by an education technology researcher and practitioner, an ethnically and age-diverse range of Chicago youth, a local grassroots technology startup, and members of the city's art community.
We demonstrate "Thermorph", a rapid prototyping system aims to print flat thermoplastic composites and trigger them to self-fold into 3D with arbitrary bending angles. An interactive web-based design platform is developed to allow users to input their desired 3D shape or design 2D pattern, and see the printer printing it out automatically. Thermorph employs a desktop FDM printer and low-cost printing filaments to make this 4D printing technique readily accessible for researchers, hobbyists, classrooms, museums and developing countries.
For this demo, we will show two interactive visualizations: Energy Futures and Pipeline Incidents. We designed and developed these visualizations as part of an open data initiative that aims to create interactive data visualizations to help make Canada's energy data publicly accessible, transparent, and understandable. This work was conducted in collaboration with the National Energy Board of Canada (NEB) and a visualization software development company, VizworX.
"One of the Family" is a 10-minute branching narrative experience for the HTC Vive, which was created to explore the potential for third-person player perspective in Virtual Reality. We endeavored to transfer the successes of this removed perspective, where the player is not assigned a character role, while still maintaining the complete immersion that makes virtual reality a unique medium for storytelling. Guests are asked to step into a film noir story, where two characters are in a safe house trying to get away from the police after a failed bank heist. Using a myriad of techniques from games, film, and theater, "One of the Family" demonstrates that third-person can and should be pursued as a method of telling interactive dramas in virtual reality.
In designing social robot motion, displaying emotion is notoriously tricky. There has been some success with arousal, but valence (positivity) remains elusive. Viewing emotion as a longitudinal, context-informed experience rather than a discrete, segmented, and homogeneous event brings nuance to the task of display, with different challenges but also opportunities.
We have observed the impact of both structural characteristics (e.g., motion complexity) and narrative frame (developed over time) on a viewer's perception of a given motion's emotional content.
Accompanying our CHI'18 paper, this demo offers exploration of both behaviour complexity and narrative frame through haptic emotion display via three stations: (1) a behaviour playback station, where attendees experience the influence of behaviour complexity on valence; (2) a robot customization station, where attendees accessorize a base prototype as they mold its personality and story; and (3) a tabletop stage, where attendees record photos and videos to preserve the stories and behaviours devised for their custom creations.
Traditional virtual reality (VR) mainly focuses on visual feedback, which is not accessible for people with visual impairments. We created Canetroller [6], a haptic cane controller that simulates white cane interactions, enabling people with visual impairments to navigate a virtual environment by transferring their cane skills into the virtual world. Canetroller provides three types of feedback: (1) physical resistance generated by a wearable programmable brake mechanism that physically impedes the controller when the virtual cane comes in contact with a virtual object; (2) vibrotactile feedback that simulates the vibrations when a cane hits an object or touches and drags across various surfaces; and (3) spatial 3D auditory feedback simulating the sound of real-world cane interactions. We demonstrate the design of Canetroller in this paper.
Dollhouse play promotes children's creativity and sociality. Implementing a virtual dollhouse on a computer screen offers a more attractive type of play because it removes restrictions on the movements of dolls and the settings of play. However, this method typically sacrifices the benefits of real doll play. By combining real doll play with a virtual dollhouse, it is possible to incorporate the advantages of both types of play. We developed a virtual dollhouse on a computer display that can be used with real doll play on a tabletop. To connect the real (tabletop) and virtual (computer display) worlds, we created a device named "GetToyIn" that lets users move their toys (dolls) in and out of the virtual dollhouse. Observations of children aged 4 to 10 years playing with our dollhouse confirmed that they could understand and operate the design model.
Hand motion and pen drawing can be intuitive and expressive inputs for professional digital 3D authoring. However, their inherent limitations have hampered wider adoption. 3D sketching using hand motion is rapid but rough, and 3D sketching using pen drawing is delicate but tedious. Our new 3D sketching workflow combines these two in a complementary manner. The user makes quick hand motions in the air to generate approximate 3D shapes, and uses them as scaffolds on which to add details via pen-based 3D sketching on a tablet device. Our air scaffolding technique and corresponding algorithm extract only the intended shapes from unconstrained hand motions. Then, the user sketches 3D ideas by defining sketching planes on these scaffolds while appending new scaffolds, as needed. Our progressive and iterative workflow enables more agile 3D sketching compared to ones using either hand motion or pen drawing alone.
In augmented and virtual reality (AR and VR), there may be many 3D planar windows with 2D texts, images, and videos on them. However, managing the position, orientation, and scale of such a window in an immersive 3D workspace can be difficult. Projective Windows strategically uses the absolute and apparent sizes of the window at various stages of the interaction to enable the grabbing, moving, scaling, and releasing of the window in one continuous hand gesture. With it, the user can quickly and intuitively manage and interact with windows in space without any controller hardware or dedicated widget. We demonstrate that our technique is performant and preferable, and that projective geometry plays an important role in the design of spatial user interfaces.
Creating whimsical, personal data visualizations remains a challenge due to a lack of tools that enable creative visual expression while providing support to bind graphical content to data. Many data analysis and visualization creation tools target the quick generation of visual representations but lack the functionality necessary for graphic design. Toolkits and charting libraries offer more expressive power but require expert programming skills to achieve custom designs. In contrast, sketching affords fluid experimentation with visual shapes and layouts in a free-form manner, but requires one to manually draw every single data point. We aim to bridge the gap between these extremes. We propose DataInk, a system that supports the creation of expressive data visualizations through direct manipulation via pen and touch input. Leveraging commonly held sketching skills, coupled with a novel graphical user interface, DataInk enables direct, fluid, and flexible authoring of creative data visualizations.
We explore the combination of smartwatches and a large interactive display to support visual data analysis. These two extremes of interactive surfaces are increasingly popular but feature different characteristics: display and input modalities, personal/public use, performance, and portability. With this demonstration, we present our conceptual framework and its implementation, which enables analysts to explore data items using both devices in combination. Building on an analysis scenario for convicted crimes in Baltimore, our demonstration gives an impression of how the device combination can allow users to develop complex insights more fluidly by leveraging the device roles.
Life-science research is often driven by advancements in biotechnology. In this demonstration, we explore technology which supports real-time interaction with living matter in the cloud. In order to enable scientists to perform more interactive experiments, we have developed a JavaScript API and corresponding online IDE which can be used to program interactive computer applications allowing the user to remotely interact with swarms of living single-celled micro-organisms in real time. The API interfaces with several remote microscopes which provide a magnified view of a microfluidic chip housing the microorganisms. We hope this work can be a start towards bringing techniques from HCI into bioengineering and biotechnology development.
End-to-end latency is the temporal difference between a user input and the corresponding output from a system. It has been shown to degrade user performance in both direct and indirect interaction. While it can be reduced to some extent, latency can also be compensated in software by predicting the future position of the cursor based on previous positions, velocities, and accelerations. In this paper, we propose a hybrid hardware and software prediction technique specifically designed to partially compensate end-to-end latency in indirect pointing. We combine a computer mouse with a high-frequency accelerometer to predict the future location of the pointer using Euler-based equations.
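The prediction itself can be as compact as a second-order Euler/Taylor extrapolation, with acceleration supplied by the accelerometer instead of being double-differentiated from noisy positions; a minimal per-axis sketch (the horizon value is an illustrative assumption):

```python
import numpy as np

def predict_pointer(x, v, a, horizon_s):
    """Extrapolate the pointer one latency horizon ahead:
    x(t + h) = x + v*h + 0.5*a*h^2, applied per axis."""
    return x + v * horizon_s + 0.5 * a * horizon_s ** 2

# Example: 50 ms horizon, state as (x, y) arrays.
pos = np.array([400.0, 300.0])        # px
vel = np.array([1200.0, -200.0])      # px/s, from recent mouse reports
acc = np.array([3000.0, 0.0])         # px/s^2, from the accelerometer
print(predict_pointer(pos, vel, acc, 0.05))
```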
We present SpokeIt, a novel speech therapy game co-created with users, medical experts, speech pathologists, developmental psychologists, and game designers. Our serious game for health aims to augment and support in-office speech therapy sessions with a supported at-home speech therapy experience. SpokeIt was co-created using participatory methods involving multiple stakeholders and target users. We describe the technical details of SpokeIt, the process of working with multiple stakeholders, and the methods that allowed us to create medium-fidelity prototypes in real time with players. We share our emerging designs, tools, and insights for the broader CHI audience, along with plans for future work.
Touch interactions are now ubiquitous, but few tools are available to help designers quickly prototype touch interfaces and predict their performance. For rapid prototyping, most applications only support visual design. For predictive modelling, tools such as CogTool generate performance predictions but do not represent touch actions natively and do not allow exploration of different usage contexts. To combine the benefits of rapid visual design tools with underlying predictive models, we developed the Storyboard Empirical Modelling tool (StEM) for exploring and predicting user performance with touch interfaces. StEM provides performance models for mainstream touch actions, based on a large corpus of realistic data. Our tool provides new capabilities for exploring and predicting touch performance, even in the early stages of design. This demonstration accompanies our full paper.
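StEM's models are fitted from its corpus and are not reproduced here; as a generic illustration of the kind of predictive model such tools expose for pointing-style touch actions, a Fitts'-law movement-time estimate looks like this (the constants are placeholders, not StEM's fitted values):

```python
import math

def fitts_movement_time(distance_mm, width_mm, a=0.1, b=0.15):
    """Shannon form of Fitts' law: MT = a + b * log2(D / W + 1)."""
    return a + b * math.log2(distance_mm / width_mm + 1)

print(f"{fitts_movement_time(120, 8):.2f} s to tap an 8 mm target 120 mm away")
```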
This work presents Dynamic Object Scanning (DO-Scanning), a novel interface that helps users browse long, untrimmed first-person videos quickly. The interface offers users a small set of object cues generated automatically and tailored to the context of a given video. Users choose which cues to highlight, and the interface in turn adaptively fast-forwards the video while playing scenes containing highlighted cues at original speed. Our experimental results reveal that DO-Scanning arranges an efficient and compact set of cues dynamically, and that this set is useful for browsing a diverse range of first-person videos.
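The adaptive fast-forward policy can be summarized in a few lines: play a frame at original speed when it contains a highlighted cue, and speed through it otherwise. A minimal sketch (the speed values are illustrative assumptions):

```python
def playback_speeds(frame_cues, highlighted, normal=1.0, fast=8.0):
    """Per-frame playback speed from detected object cues: original speed
    when a highlighted cue is present, fast-forward otherwise."""
    return [normal if (cues & highlighted) else fast for cues in frame_cues]

frames = [{"dog"}, set(), {"cup", "hand"}, set(), {"dog"}]
print(playback_speeds(frames, highlighted={"dog"}))
# [1.0, 8.0, 8.0, 8.0, 1.0]
```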
Eye movements contain a wealth of information about how we process visual information. However, typical visual representations of gaze only reflect a single feature such as the current fixation point. As eye tracking technology becomes more affordable and ubiquitous it is important to consider the entire design space for visualizing eye movements. The Iris system allows users to quickly design visualizations to represent gaze patterns in real time. The visualization can be manipulated to represent different features of gaze including current fixation, duration, and saccades. Stylistic elements can also be adjusted such as color, opacity, and smoothness to give users creative and detailed control over the design of their gaze visualization.
Spreadsheet use on tablets, based on multitouch interaction, is tedious and raises multiple interaction issues such as limited screen size and limited tactile interaction. To address these limitations, we propose the use of the stacking paradigm, which consists of laying one edge of a smartphone on a tablet screen. It offers the double benefit of augmenting the input vocabulary and extending the display surface. We built a prototype based on a 3D-printed shell augmented with copper to detect the smartphone on the tablet screen. We also developed a simplified spreadsheet app on the tablet in which users can select cells using three stacking-based interaction techniques.
Despite increased focus on classroom support, children with cognitive and learning disorders still face high levels of social exclusion in integrative mainstream schools due to impairments in social communication. Following previous structured social technological interventions that have proved useful in increasing the social skills of children with autism, this research outlines the methodologies followed to develop a virtual environment system that can rapidly transform a standard classroom into an interactive play space. Research methods included field studies, stakeholder interviews, requirements gathering, prototype construction and feedback, and usability testing. This research in developing classroom technologies stands to bring significant benefits to this group of individuals at risk of social exclusion due to cognitive or learning conditions.
Youth who experience crisis often do not receive appropriate support and end up in emergency services. Stigma and lack of awareness of other forms of community support are two primary reasons. The increased use of short-term psychotherapies, combined with opportunities driven by technological advancements, has made smartphone applications for mental health support popular, especially among youth. This community-based project is a collaboration between students from George Brown College's Interaction Design and Development program (Toronto), youth in the Region of Peel, the Provincial System Support Program (PSSP) at the Centre for Addiction and Mental Health (CAMH), the Peel Service Collaborative (PSC), and CoDesign. It involves the creation of a digital platform that draws from existing community services and aids youth in anticipating and dealing with crisis. This three-year initiative has resulted in Mellow, a mobile application for youth. This paper describes the research and design process, challenges, early findings, and the prototype. It outlines the research methods and key findings about the problems youth face when experiencing crisis and the current approach to managing crisis. It explains the collaborative design methodology employed in the development and testing of the interactive, user-centered crisis planning tool.
The benefit of urban community farms to crowded cities such as London cannot be overstated. The Spitalfields City Farm community offers a peaceful escape from busy city life and provides access to learning opportunities. To maintain its daily activities the farm depends on funding, yet, due to insufficient feedback data, funding opportunities are often missed. We present NaviiCompass, an interactive, handheld device (Figure 1). It enables the Spitalfields Farm community to collect feedback in a seamless, playful manner while providing educational content to the community. The design was inspired by observations and semi-structured interviews with members of the Spitalfields farm community, revealing their struggle to acquire feedback from visitors. Portable smart compasses direct visitors to stations around the farm where they are asked to give feedback. NaviiCompass empowers the community by enabling them to access the funding they need to support the farm's activities.
Weather-related disasters have the potential to cause devastating impact on community infrastructure, families, and lives. The current state of hurricane relief can be broken down into pre-hurricane preparation and post-hurricane recovery. However, many problems exist in both stages of overall hurricane relief from both the citizen and government perspective; these problems range from the ambiguous information experience of storm preparation, to slow repair protocols for infrastructure damage. In this paper we propose Dawn, a local weather information tool that improves the process of information retrieval and city-wide recovery from hurricanes for citizens and local governments. Dawn targets some of the key problems found in primary and secondary research regarding hurricane preparation and relief. The design provides local government Emergency Management Agencies (EMAs) with a drone swarm weather monitoring system to assess infrastructure damage costs on a large scale, and citizens with hyperlocal weather information based on their specific locations.
This project focuses on how technology can encourage and ease awkwardness-free communication between people in real-world scenarios. We propose Wearable Aura, a device that projects a personalized animation around its user. This projection, as an extension of oneself, is aware of the context, reacts to the user's activity, and interacts with anybody nearby, initiating an interplay with people and taking on the burden of making the first move. The Aura, like a lively spiritual pet, floats around the user's feet. We believe that an externalized interactive representation of the user in the form of a spiritual pet can ease and facilitate communication, serve as a conversation starter, and make interactions between people more fun. We believe that Aura will help people engage and gather in both new and existing communities.
Classical musicians spend a significant amount of their time practicing alone, which requires both intrinsic and extrinsic motivation to persevere. Recent technological interventions in the field support musicians by focusing heavily on critiquing technique, not emotional expression, or require significant investments in additional audio-visual equipment. We present Musi, a system that connects musicians through asynchronous sharing of recordings to support development of musical expression and collaboration within ensembles. Musi simplifies current processes of recording, sharing, and reflecting on practice sessions, and features a novel interface that enables users to control the system's features using their feet, allowing users to keep their hands and attention focused on their instruments. The system was prototyped and tested with undergraduate music majors at the University of Michigan, Ann Arbor.
We want to encourage community engagement in charity by creating a new type of donation box. We learned that the purchasing power of food is much greater for charities than for individuals. Our goal is to encourage cash donations by helping people visualize the impact of their contributions. This has two parts. First, we encourage donors with a series of videos and facts; among these, we compare the quantity of food an individual can purchase to that of a specific charity. Second, when donors make a contribution, they see the precise quantity of food weighed on the screen in a fun interaction. This paper describes the research methods and processes used in the development of the design.
Every day thousands of Londoners are exposed to dangerous levels of air pollution, with an estimated 9,500 Londoners dying prematurely as a result. However, the urgency of this issue is often neglected due to the invisibility of the pollutants among us. To support environmental organisations such as Greenpeace Camden, a push to increase public awareness is essential in tackling air pollution. We present VisuaLife, an engaging campaign that aims to visualise air pollution in the form of t-shirts displayed in an exhibition. The exhibition will act as a platform for discussion among residents and empower Greenpeace Camden to call for environment-friendly changes among politicians and industry leaders.
Those living in poverty are often misjudged as being responsible for their situation. Many people who do not deal with poverty believe that those who do could have easily prevented their situation, or can easily get out of it if they work "hard enough," often creating an "us" and "them" mentality. Because of this misconception, most people do not know how extensive poverty is or how many people are affected by it. The problem is commonly ignored in Bryan/College Station, TX, particularly on the college campuses in the area. One Step was created to address this misconception and bridge the gap between people who are affected by poverty and those who are not. By using interactive installations and social media to put people in the shoes of individuals living in poverty, One Step aims to create an emotional response and understanding in its viewers and to nurture a relationship between individuals affected by poverty and students in Bryan/College Station.
The purpose of this design is to both explore and address early body literacy education among communities of parents, children, and health educators. Both informed by and designed for community engagement, the Menstrual Maze is a digitally embedded educational toy that aims to engage these communities in menstrual health education. Through playful interaction, the toy steps through the menstrual process, triggering audio and visual feedback at each stage. Building off of early observations, the goal of the Menstrual Maze is to introduce children to concepts of menstruation at an early age and energize wider community support and engagement with such education. We present this project as a design-led effort to address menstrual taboo and explore early education as a public design space.
How to preserve urban memory under rapid redevelopment is an unresolved issue around the world, especially in China. Going beyond traditional concepts and practices of historic preservation, this paper proposes a solution to local communities through technological intervention. By combining offline storytelling with online collaboration, the Urban Memory project aims to remember communities that would be otherwise forgotten during urbanization.
Illiteracy is an invisible but severe problem. People with low literacy cannot access information or navigate freely due to their limited literacy skills. Our target community is adults with a literacy level lower than 3rd grade, which means they have difficulties with some everyday tasks. Through a series of design research activities, including competitive analysis, interviews, and contextual inquiry, we summarized the characteristics and needs of our target users. On the one hand, they need daily assistance recognizing information and text; on the other hand, they need better tools that assist in learning and understanding concepts. Our final solution is a mobile app, Litebox, which has two main sections: a daily assistant, which helps with recognizing text and information, and a learning assistant, which includes tools that help develop reading skills, build vocabulary, and understand calculations. The design of this app is tailored to our target community; with Litebox, we hope to empower it.
Social media contains an enormous quantity of opinions and information. Given its ubiquitous role in keeping up with current affairs, social media could be used to research historical events retrospectively by treating posts as primary sources, as in history and journalism. This paper proposes a metric system and interface design that allow users to sort through Twitter posts. The aim is to allow users to judge the reliability of social media sources and reconstruct details of a historical event. A study in which participants used a sample tool suggested that the framework is a beneficial starting point for formalising a computational process that sorts and visualises Twitter data by reliability. The study also showed that the presence of URLs in tweets (backing evidence) and the author's number of followers (reputability) were most important for discerning credibility. This project can inspire future work integrating this framework into social media applications for data extraction, user feed generation, and interface design when presenting posts to users.
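To make the scoring idea concrete, the sketch below shows one way such a metric could combine the two signals the study found most important. The weights, field names, and normalisation here are hypothetical illustrations, not the paper's actual metric.

    import math

    def reliability_score(tweet: dict) -> float:
        """Score a tweet using backing evidence (URLs) and reputability (followers)."""
        evidence = 1.0 if tweet.get("urls") else 0.0                   # URL present?
        reputability = math.log10(1 + tweet.get("followers", 0)) / 7   # roughly [0, 1]
        return 0.6 * evidence + 0.4 * min(reputability, 1.0)           # assumed weights

    tweets = [
        {"text": "Eyewitness report", "urls": ["https://example.com"], "followers": 1200},
        {"text": "Unsourced claim", "urls": [], "followers": 50},
    ]
    # Sort posts from most to least reliable under this toy metric.
    for t in sorted(tweets, key=reliability_score, reverse=True):
        print(round(reliability_score(t), 2), t["text"])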
Mobile devices such as smartphones, tablets, and IoT devices have undoubtedly been among the largest shapers of the 21st century, changing how people live, think, and work. However, mainly those in the developed world, and a privileged few in the developing world, can attest to this. Due to illiteracy, socio-economic constraints, and the unavailability of many non-Western languages, especially African languages, on these devices, the majority of people in developing countries are yet to benefit from them. Yet the rise of powerful artificial intelligence and natural language processing presents new opportunities for many users to interact with their mobile devices easily in their own vernacular language, and thus get the best out of their devices.
This paper presents results collected through an online survey across six African countries to establish what African users think of existing voice assistants such as Siri and Google Assistant. It also provides recommendations, based on our findings, for improving these voice assistants so that they better understand what Africans say and thereby improve the user experience of African users.
The Silly Lamp project strives to propose an alternative, designerly approach to Machine Learning and to produce semi-abstract knowledge about the relationship that can be established between humans and Machine Learning artefacts. Aiming to fill the gap between technology-driven research, with its utility-focused application proposals heavily grounded in the tradition of computer science, and more recent work done from an artistic point of view, the Research through Design process described here works within the emerging Human-Centred Machine Learning field. In a series of design engagements and experiments, grounded in the present time and simple technology, we explore the relationship between a user and a machine learning artefact in a home setting, together with the power relations it establishes and the human role within the space opened up by this design approach.
Patients with Discordant Chronic Comorbidities (DCCs) live with two or more chronic illnesses with opposing treatment instructions. These conditions can make it difficult for patients and health care providers to prioritize and manage the treatment of each individual disease. No technological interventions exist to help these patients prioritize and manage the treatments of all their conditions simultaneously. In this paper, we use the barriers patients face, and the solutions they wish to utilize, in managing their conditions and treatments to motivate the design of a mobile application that helps patients successfully manage their conditions. This mobile application allows patients to better understand their conditions and treatments, successfully manage their medications and other treatments, stay connected with their medical and social support networks, and take steps to achieve goals related to their overall health and well-being.
This study examines how users interact with Google Home, a type of home virtual assistant (HVA). Users are expected to speak to HVAs in a conversational manner; however, there has been little research on users' mental models of what kinds of interactions they think the devices are capable of. To investigate users' mental models, I conducted user study sessions in which I gave novice users several tasks to complete and asked them to think aloud as they completed those tasks. I elicited two mental models (push, pull) from the verbal strategies they used to complete the tasks. My findings help explain why users may be reluctant to use HVAs, and provide design guidance for future conversational interfaces.
We suggest that the way motivational messages look (their visual presentation, e.g., typefaces, colors) can influence individuals' attitudes toward engaging in physical exercise, independently of whether they are mainly intrinsically or extrinsically motivated to exercise. If individuals are mainly intrinsically motivated, positive affect from aesthetic pleasure should reinforce their positive affective responses. If individuals are extrinsically motivated, the aesthetic pleasure alone should be rewarding and influence their cognitive responses. Moreover, for extrinsically motivated individuals, positive affect from aesthetic pleasure should strengthen the impact of aesthetic pleasure on their attitudes, since it should positively affect their intrinsic motivation (increasing the feeling of joy or, in other words, shaping their affective responses). If our assumptions are verified, this could be particularly relevant for shaping individuals' attitudes toward physical activity, since affective responses can better predict the intention to exercise.
This project aims to assist those suffering from substance addiction so that they might better understand their mental health. I propose a user-level, in-app relapse-report classifier that achieves an overall accuracy of 0.882 on a dataset in which 18% of users report a substance relapse. Interactions between engagement and views of inspirational messages of the day are complex, yet significant. Predictive analytics inspire design implications for sociotechnical systems that aim to facilitate the addiction recovery process.
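As a concrete illustration of this kind of predictive analytics, the sketch below trains a classifier on synthetic data with an interaction term between engagement and message views and roughly 18% positive labels. The features, effect sizes, and data are invented for illustration; they are not the project's dataset or model.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n = 2000
    engagement = rng.normal(size=n)   # assumed feature: weekly in-app engagement
    views = rng.normal(size=n)        # assumed feature: views of daily messages
    X = np.column_stack([engagement, views, engagement * views])  # interaction term
    logits = -1.8 + 0.8 * engagement * views   # synthetic effect, ~18% positives
    y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
    clf = LogisticRegression(class_weight="balanced").fit(X_tr, y_tr)
    print("accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))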
This paper introduces Reflex, a mirrored-camera mobile game for children and adolescents, especially those with a neuro-developmental disorder. The game, offered through a cross-platform application for smartphones and tablets, bridges the digital and physical worlds by tracking, via a downward-looking mirror positioned on the device camera, physical items placed on the table. This new interaction paradigm, its co-designed features, and a first pilot study reveal an unexplored potential for learning.
In a social context where a real human interacts with a virtual human (VH) in the same space, one's sense of social/co-presence with the VH is an important factor in the quality of the interaction and the VH's social influence on the human user. Although augmented reality (AR) enables the superposition of VHs onto the real world, the resulting sense of social/co-presence is usually far lower than with a real human. In this paper, we introduce a research approach employing multimodal interactivity between the virtual environment and the physical world, where a VH and a human user are co-located, to improve social/co-presence with the VH. A preliminary study suggests a promising effect on the sense of co-presence with a VH when a subtle airflow from a real fan can blow virtual paper and curtains next to the VH as a form of physical-virtual interactivity. Our approach can be generalized to support social/co-presence with any virtual content in AR beyond these particular VH scenarios.
Satisfactory peer group interactions within a university, through the formation of close associations, shape a student's personality and help deter the rise of depression caused by academic, financial, or emotional troubles. In this work, we conduct a pre-study survey of 177 students in a university setting to assess the need for a smartphone-based study to detect and monitor group formation, evolution, and engagement. The preliminary results reveal that students' social interactions are not limited to one but several groups, and that the satisfaction levels associated with each type of group are indicative of the average time spent engaging with that group. Intra-group bond strength took precedence as a satisfaction determinant over the location or activity engaged in. Further, we present design recommendations for a minimally invasive smartphone-based study.
This paper describes a co-design PhD research project involving survivors of domestic abuse and professional support workers. It aims to address the novel challenges posed by the Internet of Things within intimate abusive relationships, with a focus on cyber-aggression and cyber-harassment. Preliminary results from semi-structured interviews with support workers are presented.
Requesting and receiving messages of encouragement on social media has previously been shown to significantly reduce test anxiety for students. We present an empirical study to test whether autogenerated messages of encouragement on social media are as effective as those from real people. Our results both confirm and extend previous research by showing that social encouragement can lower anxiety, but knowingly receiving autogenerated encouragement severely diminishes this effect.
To protect against misuse, mobile operating systems require apps to seek user permission before accessing their personal data. While this measure gives users control, they are often asked for their data without knowing who the data will ultimately be shared with, why, and how often. This paper presents results from the evaluation of a proof-of-concept permission manager for iOS that allows users to adjust privileges granted to apps, substitute their personal data with mock data, and review data that has been transmitted to a server.
A majority of users of smartphones and laptops report that they struggle with effective self-control over their device use. In response, HCI research - as well as a rapidly growing commercial market for 'anti-distraction tools' - has begun to develop apps, browser plugins, and other tools that help users understand and regulate their use. The extensive literature on the mechanics of self-regulation from cognitive neuroscience and behavioural economics might help guide this work. However, the emerging HCI work has so far drawn on a very limited subset of self-regulatory models, in particular Social-Cognitive Theory. Here, we draw together the main insights from a broader spectrum of basic research on the mechanics of self-regulation into a simple framework. We use the resulting model to analyse interventions in a sample of 112 existing anti-distraction tools, and hope it may contribute a useful alternative view of the design space for UI features that support self-regulation.
Children need education about digital literacy issues appropriate for their age. We conducted a user study to evaluate the usability and effectiveness of a digital literacy game for 11-13 year old children. Results showed that children's digital literacy knowledge and intended behavior improved significantly immediately after playing the game and one week later.
Currently, usable security and web accessibility design principles exist separately. Although literature at the intersection of accessibility and security is developing, it offers limited understanding of how users with vision loss operate the web securely. In this paper, we propose heuristics that fuse the nuances of both fields. With these heuristics, we evaluate 10 websites and uncover several issues that can impede users' ability to follow common security advice.
In 2017 the dissemination of scientific research to the public was disrupted by both governmental interventions and cuts to science journalism staff by news organizations. Inaccurate public information about science will bias the public's votes and decisions on science policy. Can science writers respond to these challenges by utilizing Human-Computer Interaction (HCI) and Computer-Supported Cooperative Work (CSCW)? To explore this question, this project recruited 1) science journalism students, 2) professional science journalists, and 3) a panel of scientists to study the bias of information transmitted between them via an HCI and CSCW approach. Journalists wrote articles on science controversies via a web application designed for argumentation. The articles produced were then evaluated by the panel of scientists for accuracy (i.e., number of errors) in a double-blind trial. Findings include: (1) journalists reported that the HCI and CSCW approach increased their comprehension of controversies; (2) scientists rated the articles produced via the HCI and CSCW approach as more accurate than those produced without it.
This study explores the role of social media use in the social development of underserved or "at-risk" youth in Los Angeles, CA and Lafayette, IN. A major goal of this research is building a knowledge base of this group's social media use to beneficially address their social support needs and aid in their successful assimilation into adulthood. This study employs ethnographic methods of semi-structured intensive interviews and participant observation with Santa Monica and Lafayette Boys & Girls Clubs program participants ages 10-15. Findings may be of particular interest to areas of CMC, HCI, social psychology, and policy/research in youth service development and the intersection of technology and health or well-being. The notion of "at-risk" is interrogated as part of this study.
Modernizing the energy grid into a stable, sustainable, multi-directional system of energy generation, consumption, and storage in the context of the automated smart home requires new forms of civic engagement. An interdisciplinary body of eco-feedback scholarship has highlighted the potential for technologically mediated feedback to make people more aware and cognizant of their energy usage. But most commercial energy monitors draw on economic behavioral models that render residents as rational decision makers who seek to save money on their next energy bill. More recent academic research, grounded in social psychology, suggests that prospective energy conservationists are driven by social norms. These individualist perspectives have failed to address the potential of leveraging social behavioral theories to design technologically mediated feedback that enhances individuals' awareness of, and willingness to cooperate with, the wider digitally interconnected social collective. This research addresses this gap by testing how particular visual structures of feedback graphs increase collective awareness and cooperation in the home energy-monitoring context. Integrating socio-cognitive theories that explain how information cues affect motivations and goal-oriented behaviors with existing work in information visualization, this dissertation theoretically derives and empirically examines a set of visual design patterns for eco-feedback that may make individuals more collective-minded and cooperative.
Smart assistants are the current must-have device in the home. Currently available products do little to respect the autonomy and privacy of end users, but it doesn't have to be this way. My research explores a speculative 'respectful' assistant which is more socially aware, and treats its users in a more nuanced way than occurs at present. Mixing computer science, philosophy, and art, the project uses a combination of user studies and technical comparison to discover a potential future for the smart digital assistant.
While prior work has shown that pair programming is successful in helping students develop confidence and produce better programs, research in collaboration and computing education suggests that it could be designed more effectively as a pedagogical tool. In this paper we present Pyrus, a collaborative programming game that explicitly scaffolds for behaviors linked to developing programming problem-solving skills. We evaluated Pyrus by comparing the amount of planning and verbal explanation demonstrated by pairs of programmers who worked on programming challenges in Pyrus and traditional pair programming. Our results suggest that Pyrus may more reliably support planning than traditional pair programming.
On e-ink displays, as on any display, typefaces are what convey information, and whether that information is a story or data, its legibility matters. E-ink displays demand different typeface choices because they lack color and have lower resolution than modern displays. We developed two typefaces to assess legibility on e-ink displays in an educational or business environment, using the typeface Arial as a baseline. Legibility is measured in terms of how many words the subject reads aloud correctly and the time taken to begin speaking. Time is used not to assess reading speed, but to establish difficulty in legibility. Three different font sizes are used to determine any lower-bound thresholds on legibility within each typeface. These methods allow a font to be developed and used with e-ink displays for clear communication.
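As a worked example of these two measures, the sketch below aggregates trial records into per-condition accuracy and speech-onset times. The typeface names, sizes, and numbers are invented for illustration; they are not the study's data.

    from collections import defaultdict
    from statistics import mean

    # (typeface, font size in pt, words read correctly, words shown, onset in s)
    trials = [
        ("Arial",      12, 9, 10, 1.2),
        ("Typeface-A", 12, 8, 10, 1.6),
        ("Typeface-A",  8, 6, 10, 2.4),
    ]

    by_condition = defaultdict(list)
    for face, size, correct, shown, onset in trials:
        by_condition[(face, size)].append((correct / shown, onset))

    for (face, size), results in sorted(by_condition.items()):
        accuracy = mean(r[0] for r in results)   # proportion read correctly
        onset = mean(r[1] for r in results)      # mean time to first word
        print(f"{face} {size}pt: accuracy={accuracy:.2f}, onset={onset:.1f}s")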
While many existing platforms help people plan and coordinate collective action, many serendipitous moments for group interaction are missed because we are not able to recognize them. In this paper, we aim to facilitate interaction in these missed opportunities through a new form of interaction we call opportunistic collective experiences. These are short, goal-driven activities that opportunistically find participants based on their situational context and allow participants to connect with one another in a low-effort way. To create and run these experiences, we present a system called Cerebro, which automatically gathers and coordinates participation in experiences by detecting information about users' physical surroundings. In a preliminary study, we ran three different opportunistic collective experiences and found that they led to interest in other participants, excitement about participating in unique locations, and increased awareness of one's surroundings.
These days, news about events and accidents is reported quickly through the internet and mobile applications. Although most breaking news is accurate, reports of ongoing events may deliver misinformation as circumstances change. Even when the misinformation is corrected afterwards, people tend to remain influenced by it. This study therefore aims to find out which characteristics of a news chatbot agent make correction messages effective. The results show that a silent news chatbot is more likely to leave people influenced by misinformation than a news chatbot that adds a few words of its own when asked for information.
Despite decades of research into developing abstract security advice and improving interfaces, users still struggle to create strong passwords. Users frequently create passwords that are predictable for attackers [1, 9] or make other decisions (e.g., reusing the same password across accounts) that harm their security [2, 8]. In this thesis, I use data-driven methods to better understand how users choose passwords and how attackers guess passwords. I then combine these insights into a better password-strength meter that provides real-time, data-driven feedback about the user's password. I first quantify the impact on password security and usability of showing users different password-strength meters that score passwords using basic heuristics. I find in a 2,931-participant online study that meters that score passwords stringently and present their strength estimates visually lead users to create stronger passwords without significantly impacting password memorability [6]. Second, to better understand how attackers guess passwords, I perform comprehensive experiments on password-cracking approaches. I find that simply running these approaches in their default configuration is insufficient, but that considering multiple well-configured approaches in parallel can serve as a proxy for guessing by an expert in password forensics [9]. The third and fourth sections of this thesis delve further into how users choose passwords. Through a series of analyses, I pinpoint ways in which users structure semantically significant content in their passwords [7]. I also examine the relationship between users' perceptions of password security and passwords' actual security, finding that while users often correctly judge the security impact of individual password characteristics, wide variance in their understanding of attackers may lead users to judge predictable passwords as sufficiently strong [5]. Finally, I integrate these insights into an open-source password-strength meter that gives users data-driven feedback about their specific password. This meter uses neural networks [3] and numerous carefully combined heuristics to score passwords and generate data-driven text feedback about a given password. I evaluate this meter through a ten-participant laboratory study and a 4,509-participant online study [4]. Under the more common password-composition policy we tested, we find that the data-driven meter with detailed feedback leads users to create more secure, and no less memorable, passwords than a meter with only a bar as a strength indicator. In sum, the objective of this thesis is to demonstrate how integrating data-driven insights about how users create, and how attackers guess, passwords into a tool that presents real-time feedback can help users make better passwords.
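For readers unfamiliar with how such meters work mechanically, the sketch below shows a toy heuristic scorer with text feedback. It is only a simplified illustration of the general idea; the thesis's meter instead combines neural-network guessability estimates with many carefully tuned heuristics, and the blocklist, rules, and weights here are invented.

    import re

    COMMON = {"password", "123456", "qwerty", "letmein"}  # tiny stand-in blocklist

    def score_password(pw: str) -> tuple[int, list[str]]:
        """Return a rough strength score plus text feedback about the password."""
        score, feedback = 0, []
        if pw.lower() in COMMON:
            return 0, ["This is among the most commonly guessed passwords."]
        score += min(len(pw), 16)                        # length dominates
        if re.search(r"[A-Z]", pw): score += 2           # character-class bonuses
        if re.search(r"\d", pw): score += 2
        if re.search(r"[^A-Za-z0-9]", pw): score += 3
        if re.search(r"(.)\1{2,}", pw):                  # penalize repeated runs
            score -= 3
            feedback.append("Avoid repeating the same character.")
        if len(pw) < 12:
            feedback.append("Longer passwords are much harder to guess.")
        return max(score, 0), feedback

    print(score_password("correcthorse!7"))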
As an industry researcher in 1997, I dove head first into the world of privacy when I joined an international working group that was developing a web privacy standard called the Platform for Privacy Preferences Project (P3P). Released in 2002, the P3P standard allowed websites to communicate about privacy in a computer-readable format that could be consumed by web browsers and other user agents to inform users and take actions on their behalf. I worked with (and eventually led) a team of technologists, lawyers, and policy makers, and became familiar with not only technologies for protecting and invading privacy, but also international privacy laws and self-regulatory privacy programs. As the working group debated details such as the precise definitions of data sharing and identifiable information, I came to the realization that we hadn't thought much about how to make P3P tools usable. Indeed, usability issues had been largely ignored in the development of most security and privacy tools at that time. At the end of 2003, I moved on to academia and focused my research on usable privacy and security. Along with my students and colleagues, I conducted empirical studies to evaluate privacy and security tools, and recommended ways to make these tools more usable. We asked study participants to make privacy sensitive purchases (condoms and sex toys), and conducted some of the first Mechanical Turk studies related to privacy. Our research papers provided empirical evidence about how long it would take people if they actually did read privacy policies (too long!), that ad-industry-driven privacy efforts were largely ineffective, that privacy "nutrition labels" could help people compare company privacy practices, and that many people care enough about privacy to actually pay for it. We presented our research at events at the US Federal Trade Commission (FTC) and on Capitol Hill. In 2016 I spent a year in Washington, DC as Chief Technologist at the FTC. Besides advising the chairwoman and staff, I organized an FTC workshop to discuss methods for evaluating the effectiveness of privacy policies and other disclosures. After having my mobile phone account hijacked, I investigated this form of identity theft, and ended up discussing it on a Today Show segment taped in my kitchen. I also wrote blog posts that raised awareness about privacy concerns associated with open police data, and explained why frequent password changes may not be beneficial. In this talk I will discuss my usable privacy and security research and how it has informed policy work. I will highlight empirical studies that provide insights into users' expectations about privacy and security, as well as their use of privacy and security tools. Finally, I will talk about my experiences at the FTC and ways that members of the CHI community can impact public policy.
I'm an optimist. I entered our field believing that we can use technology to make people's lives better, and agreeing with Lincoln, Drucker, Kay and others that "The best way to predict your future is to create it." For HCI, as the technological tools we have available to design experiences have evolved, our impact has moved from its initial focus on human factors, fitting systems to users (efficiency and effectiveness), to 2nd wave considerations of people in community and the rich context of life, and delight and identity. Third wave HCI recognizes the growing ubiquity of a digital world as the new context that we shape and are shaped by, and its ability to touch every aspect of what it is to be human. It is the ecosystem of digital products that many of us are creating with maturing and diverse design perspectives that is enriching lives. There has certainly been a dark side of those left behind, unintended consequences and apparent failures of promising innovations. In this talk I will be reflecting on lessons learned through these waves of evolution in our field. I'll also be sharing what I believe are some of the properties of the wave that could come if we embrace the transformative power of the next generation of technology to address the toughest social issues we face by baking universal design into the fabric of the digital world.
Personal fabrication tools, such as 3D printers, are on the way to enabling a future in which non-technical users will be able to create custom objects. However, while the hardware is there, the current interaction model behind existing design tools is not suitable for non-technical users. Today, 3D printers are operated by fabricating the object in one go, which often takes all night due to the slow speed of 3D printing technology. Consequently, the current interaction model requires users to think carefully before printing, as every mistake may imply another overnight print. Planning every step ahead, however, is not feasible for non-technical users, as they lack the experience to reason about the consequences of their design decisions. In this dissertation [2], we propose changing the interaction model around personal fabrication tools to better serve this user group. We draw inspiration from personal computing and argue that the evolution of personal fabrication may resemble the evolution of personal computing: computing started with machines that executed a program in one go before returning the result to the user (Figure 1). By decreasing the interaction unit to single requests, turn-taking systems such as the command line evolved, which provided users with feedback after every input. Finally, with the introduction of direct-manipulation interfaces, users continuously interacted with a program, receiving feedback about every action in real time. In this dissertation, we explore whether these interaction concepts can be applied to personal fabrication as well.
I have long been interested in how computers can help people perform skilled tasks, a theme that underlies essentially all of my research. As the intentionally convoluted title of this talk implies, I will take a look back at some of this research to explain what came later, and to speculate on what might be next. My earliest work as a graduate student was as part of a team developing novel hypermedia editing and presentation tools for technical documentation of equipment maintenance procedures. Understanding the effort involved in using these tools manually led to my dissertation, which explored rule-based techniques for the automated generation of 3D graphics that could ultimately replace the pictures found in maintenance manuals. Based on information about the task to be depicted, I designed a system that chose the objects to include, determined the level of detail at which to render them (e.g., adding more detail to disambiguate objects that could otherwise be confused with each other), highlighted important objects for emphasis, and created additional "metaobjects" (e.g., arrows to show actions). When I started as a faculty member, several of my students built on this research direction in both 3D and 2D domains, and we were soon collaborating with colleagues to develop generation approaches for coordinated multimedia explanations that combined graphics with text. At the same time, I was becoming excited by the potential of virtual reality (VR) to render objects stereoscopically and let users interact with them in 3D, in abstract domains as well as physical ones. And it was an easy jump from there to augmented reality (AR), cobbling together a simple monoscopic optical see-through head-worn display. Our first AR system was a "hybrid" desktop window manager that embedded windows displayed on a flat panel display within a much larger virtual surround presented on the AR display. With that AR display now available as a tool, it became stunningly clear that so much of the effort we'd put into carefully crafting stand-alone pictures to explain tasks was unnecessary. If the user could instead look directly at the actual task domain, then a simple highlight or arrow, overlaid in situ, could show which button to push or knob to turn on a panel of controls.
Increasingly, HCI researchers are recognizing the challenges and opportunities of designing with and for people living with dementia. Recent critiques have highlighted the limited ways people with dementia are engaged in the research and design process. The second CHI HCIxDementia workshop will focus on engagement with and by people living with dementia. Through interactions with local community organizations and people living with dementia, workshop attendees will explore design possibilities. Building on open questions from the CHI 2017 workshop, this workshop will address how HCI researchers can support people living with dementia in engaging as leaders and with research, industry, and the community.
More HCI designs and devices are embracing what is being dubbed "body-centric computing," where designs deliberately engage the body as the locus of interest, whether to move the body into play or relaxation, to track and monitor its performance, or to use it as a surface for interaction. Most HCI researchers engage in these designs, however, with little direct knowledge of how the body itself works, either as a set of complex internal systems or as sets of internal and external systems that interact dynamically. The science of how our body interacts with the microbiome around us also increasingly demonstrates that our presumed boundaries between what is inside and outside us may be misleading if not considered harmful. Developing both (1) introductory knowledge and (2) design practice of how these in-bodied and circum-bodied systems work with our understanding of the em-bodied self, and how this gnosis/praxis may lead to innovative new body-centric computing designs, is the topic of this workshop.
A central viewpoint for understanding the human aspects of interactive systems is the concept of technology acceptance. Actual or imagined disapproval from other people can have a major impact on how information technology innovations are received, but HCI lacks comprehensive, up-to-date, and actionable articulations of "social acceptability". The spread of information and communication technologies (ICT) into all aspects of our lives appears to have dramatically increased the range and scale of potential issues with social acceptance. This workshop brings together academics and practitioners to discuss what social acceptance and acceptability mean in the context of various emerging technologies and modern human-computer interaction. We aim to bring the concept of social acceptability in line with the current technology landscape, as well as to identify the research steps needed to make it more useful, actionable, and researchable, with well-operationalized metrics.
The design, development and deployment of new technology is a form of intervention on the social, psychological and physical world. Whether explicitly intended or not, all digital technology is designed to support some vision of how work, leisure, education, healthcare, and so on, is organised in the future [11]. For example, most efforts to make commercial systems more usable, efficient and pleasurable, are ultimately about the vision of increased profits as part of a capitalist society. This workshop will bring together researchers, designers and practitioners to explore an alternative, post-capitalist, "grand vision" for HCI, asking what kind of futures the community sees itself as working towards. Are the futures we are building towards any different from those envisioned by Silicon Valley entrepreneurs, which are typically neoliberal, absent of strict labour laws, licensing fees, tax declarations and the necessity to deal with government bureaucracy?
Long-term self-tracking of health, over periods of years, decades, or ultimately a lifetime, provides tremendous opportunities for personal health. However, people face barriers to making use of self-tracking that many find insurmountable. It has become clear that considerable work is needed to turn tracking from a toy into a tool. We suggest three research themes: the user's double role in long-term self-tracking as both a producer of data and a consumer of information; sensemaking of long-term data; and the needs, challenges, and opportunities arising in creating new applications. As a cross-cutting issue, we address challenges for HCI research on long-term tracking.
This extended abstract describes the background, goals, and organization of the sixth International Workshop of Chinese CHI (Chinese CHI 2018).
As mobile visualization is increasingly used and new mobile device form factors and hardware capabilities continuously emerge, it is timely to reflect on what has been discovered to date and to look into the future. This workshop will bring together researchers, designers, and practitioners from relevant application and research fields, including visualization, personal informatics, and data journalism. We will work on identifying a research agenda for mobile data visualization, and on collecting and propagating practical guidance for mobile visualization design. Our overarching goal is to bring us closer to making effective use of ubiquitous mobile devices as data visualization platforms.
This paper describes a "data-driven educational game design" CHI workshop. The intent of the workshop is to bring together experts from CHI, educational games, learning science and data analytics to discuss how game playing works for learning and how games can be better designed to lead to engagement and learning. The outcome of the workshop will be a journal paper that summarizes the current state-of-the-art in data-driven educational game design and provides recommendations for the way forward for educational game designers and developers.
This workshop focuses on the design of digital interactive technology for promoting sexual wellbeing as a fundamental human rights issue and social justice concern in the field of Human Computer Interaction (HCI). Sexuality related topics have garnered much interest in recent years and there is a need to explicitly engage with the intersections of sexuality and social justice as applicable to the design and development of digital interfaces and interactive experiences. This one day workshop will raise intersectional issues, identify research gaps, gather resources, and share innovation strategies for designing sociotechnical interfaces that promote sexual wellbeing in HCI.
Digital food technologies such as diet trackers, food sharing apps, and 'smart' kitchenware offer promising yet debatable food futures. While proponents point to their potential to promote efficient food lifestyles, critics highlight the technosolutionism underlying digital food innovation and its limitations related to health safety and data privacy. This workshop addresses both present and near-future digital food controversies and seeks to extend the existing body of Human-Food Interaction (HFI) research. Through scenarios and food-tech prototyping navigated by bespoke Digital Food Cards, we will unpack issues and suggest possible design approaches. We invite proposals from researchers, designers, and other practitioners interested in working towards a complex framework for future HFI research.
The goal of this one-day workshop is to open space for disruptive techniques and strategies to be used in the making, prototyping, and conceptualization of the artifacts and systems developed and imagined within HCI. Specifically, this workshop draws on strategies from art, speculative design, and activism, as we aim to productively "trouble" the design processes behind HCI. We frame these explorations as "disruptive improvisations" - tactics artists and designers use to make the familiar strange or to creatively problematize in order to foster new insights. The workshop invites participants to inquire through making and to take up key themes as starting points for developing disruptive improvisations for design. These include modesty, scarcity, uselessness, no-technology, and failure. The workshop will produce a zine workbook or pamphlet, distributed during the conference, to bring visibility to the role these tactics of making play in creative design practices.
HCI in outdoor recreation is a growing research area. While papers investigating systems in specific domains, such as biking, climbing, or skiing, are beginning to appear, the broader community is only just forming and still lacks a cohesive agenda for advancing our understanding of this application domain. The goal of this workshop is to bring together individuals interested in HCI outdoors to review past work, build a unifying research agenda, share ongoing work, encourage collaboration, and make plans for future meetings. The workshop will result in a report containing a research agenda, an extensive annotated bibliography, an article about this topic, and plans for unifying the community at future meetings.
Technology has become central to many activities of learning, ranging from its use in classroom education to work training, mastering a new hobby, or acquiring new skills of living. While digitally-enhanced learning tools can provide valuable access to information and personalised support, people with specific accessibility needs, such as low or no vision, can often be excluded from their use. This requires technology developers to build more inclusive designs and to offer learning experiences that can be shared by people with mixed-visual abilities. There is also scope to integrate DIY approaches and provide specialised teachers with the ability to design their own low cost educational tools, adapted to pedagogical objectives and to the variety of visual and cognitive abilities of their students. For researchers, this invites new challenges of how to best support technology adoption and its evaluation in often complex educational settings. This workshop seeks to bring together researchers and practitioners interested in accessibility and education to share best practices and lessons learnt for technology in this space; and to jointly discuss and develop future directions for the next generation design of inclusive and effective education technologies.
The rise of the Internet of Things (IoT) brings abundant new opportunities to create more effective and pleasing tangible user interfaces that capitalize on intuitive interaction in the physical world, whilst utilizing the capabilities of sensed data and Internet connectivity. However, with these new opportunities come new challenges; little is yet known about how best to design tangible IoT interfaces that simultaneously provide engaging user experiences and foster a sense of understanding of the often-complex functionality of IoT systems. How should we map previous taxonomies and design principles for tangible interaction onto the new landscape of IoT systems? This workshop will bring together a community of researchers from the fields of IoT and tangible interaction to explore and discuss how the parallels between tangible interaction and the properties of IoT systems can best be capitalised on as HCI research moves increasingly toward the Internet of Tangible Things (IoTT). Through ideation and discussion, the workshop will function as a springboard for the community to begin creating new taxonomies and design considerations for the emerging IoTT.
Control interfaces and interactions based on touch-less gesture tracking devices have become a prevalent research topic in both industry and academia. Touch-less devices offer a unique interaction immediateness that makes them ideal for applications where direct contact with a physical controller is not desirable. On the other hand, these controllers inherently lack active or passive haptic feedback to inform users about the results of their interaction. Mid-air haptic interfaces, such as those using focused ultrasound waves, can close the feedback loop and provide new tools for the design of touch-less, un-instrumented control interactions. The goal of this workshop is to bring together the growing mid-air haptic research community to identify and discuss future challenges in control interfaces and their application in AR/VR, automotive, music, robotics and teleoperation.
As our lives become increasingly digitized, how people maintain and manage their networked privacy has become a formidable challenge for academics, practitioners, and policy-makers. A shift toward people-centered privacy initiatives has shown promise; yet many applications still adopt a "one-size-fits-all" approach, which fails to consider how individual differences in concerns, preferences, and behaviors shape how different people interact with and use technology. The main goal of this workshop is to highlight individual differences (e.g., age, culture, personal preference) that influence users' experiences and privacy-related outcomes. We will work towards best practices for research, design, and online privacy regulation policies that consider these differences.
This workshop aims to generate an interdisciplinary research agenda for digital touch communication that effectively integrates technological progress with robust investigations of the social nature and significance of digital touch. State-of-the-art touch-based technologies have the potential to supplement, extend or reconfigure how people communicate through reshaping existing touch practices and generating new capacities. Their possible impact on interpersonal intimacy, wellbeing, cultural norms, ways of knowing and power relations is far-reaching and under-researched. Few emerging devices and applications are embedded into everyday touch practices, limiting empirical exploration of the implications of digital touch technologies in everyday communication. There is, thus, a real need for methodological innovation and interdisciplinary collaboration to critically examine digital touch communication across social contexts and technological domains, to better understand the social consequences of how touch is digitally remediated. This agenda-setting workshop will bring together HCI researchers and designers with colleagues from sociology, media & communications, arts & design to address key research challenges and build the foundations for future collaborations.
Conventional smart city design processes tend to focus on instrumental planning for city systems or novel services for humans. Interacting with data produced by the new services and restructured systems entailed by these processes is commonly done via interfaces like civic dashboards, leading to a critique that data-driven urbanism is bound by the rules and constraints of dashboard design [1]. Informed citizens are expected to engage with new urban information flows through the logic of dashboard interfaces. What datastreams are left off the dashboard of engaged urban experience? What design opportunities arise when dashboard visualizations are moved into the domain of mixed reality? In this two-day workshop, participants will construct prototype mixed reality interfaces for engaging the informational layer of the built urban environment. Using the Unity game engine and the Microsoft HoloLens, participants will focus on generative design in the space of data-driven interfaces, addressing issues of data access, civic agency, and privacy in the context of smart cities. Specific attention will be paid to interfaces that facilitate harmonious co-existence between humans and non-human systems (AI, IoT, etc.).
Sensemaking is a common activity in the analysis of large or complex bodies of information. This active area of HCI research asks how people come to understand such difficult sets of information. The information workplace is increasingly dominated by high-velocity, high-volume, complex information streams. At the same time, understanding how sensemaking operates has become an urgent need in an era of increasingly unreliable news and information sources. While there has been a huge amount of work in this space, the research involved is scattered across a number of different domains with differing approaches. This workshop will focus on the most recent work in sensemaking: the activities, technologies, and behaviors involved when people make sense of their complex information spaces. In the second part of the workshop we will work to synthesize a cross-disciplinary view of how sensemaking works in people, along with the human behaviors, biases, proclivities, and technologies required to support it.
The emergence of smart objects has the potential to radically change the way we interact with technology. Through embedded means for input and output, such objects allow for more natural and immediate interaction.
The SmartObjects workshop will focus on how such embedded intelligence in objects situated in the user's physical environment can be used to provide more efficient and enjoyable interactions. We will discuss design from both the technology and the user experience perspectives.
Harassment, hate speech, and other forms of abuse remain a persistent problem in online communities today. In order to discourage misbehavior in online spaces, we must first understand why everyday people participate in abusive behaviors online. This workshop aims to bring together a diverse range of researchers, practitioners, and activists for cross-disciplinary community-building, foundational understanding, and collaborative ideation. Workshop participants will engage in practical brainstorming exercises, develop research plans and design ideas, and build relationships with other researchers, practitioners, and activists to collaboratively realize ideas generated from the workshop. This one-day workshop is led by industry and academic researchers and will accommodate up to 30 participants.
Artists have been using BCIs for artistic expression since the 1960s. Their interest and creativity are now increasing because of the availability of affordable BCI devices and software that do not require them to invest extensive time in getting the BCI to work or tuning it to their application. Designers of artistic BCIs are often ahead of more traditional BCI researchers in ideas on using BCIs in multimodal and multiparty contexts, where multiple users are involved and where robustness and efficiency are not the main matters of concern. The aim of this workshop is to look at current (research) activities in BCIs for artistic expression and to identify research areas that are of interest to both BCI and HCI researchers as well as artists/designers of BCI applications.
ACM SIGCHI is the largest association for professionals in HCI, bridging computer science, information science, and the social and psychological sciences. Meanwhile, a parallel HCI community was formed in 2001 within the Association for Information Systems (AIS SIGHCI). While some researchers have already bridged these two HCI sub-disciplines, the histories and core values of the respective fields are quite different, offering new insights into how we can move forward together to sustain the future of HCI research. The main goal of this workshop is to begin building a bridge between these two communities to maximize the relevance, rigor, and generalizability of HCI research.
We are experiencing two revolutions: ubiquitous digital technology and worldwide population aging. Digital devices are becoming ubiquitous, and older people are becoming the largest demographic group. However, despite the recent increase in related CHI publications, older adults continue to be underrepresented in HCI research as well as commercially, further widening the digital divide they face and hampering their social participation. The overarching aim of this workshop is therefore to increase the momentum for such research within CHI and related fields such as gerontechnology. To this end, we plan to create a space for discussing and sharing principles and strategies for designing interactions and evaluating user interfaces for the aging population. We welcome contributions on empirical studies, theories, and the design and evaluation of user interfaces for older adults. Concretely, we aim to: map the state of the art of senior-centred interaction research, build a multidisciplinary community of experts, and raise the profile of this research within SIGCHI.
ACM SIGCHI has been supporting research in HCI education for many years, most actively from 2011 to 2014. At CHI 2014, a workshop on developing a new HCI living curriculum was held, building on three years of research and collaboration. We believe the time is right to develop and implement the suggested HCI living curriculum. We propose a hands-on workshop to develop a concrete, active community of practice of HCI scholars and educators, sharing and collaborating to develop course outlines, curricula, and teaching materials. The workshop will define the conceptual framework and user experience of the HCI living curriculum, develop its information architecture and infrastructure, and evaluate how existing platforms do and do not fulfill the proposed needs. Post-workshop initiatives will aim to move towards implementing the first iteration of the living curriculum.
User experience (UX) research is moving from product- and user needs-centric design towards more holistic design for services. At the same time, digitalization is driving the Service Design community towards digital services. Despite the similarity of interests, these two communities have been surprisingly apart. This workshop focuses on the intersection of experience and service design, discussing the ideological and methodological similarities and differences between the two. The workshop has four objectives: (1) to explore ideological and methodological similarities and differences in service and experience design, (2) to share experiences of integrating service and experience design, (3) to identify research themes for the future, and (4) to connect people working in this area.
ArabHCI is an initiative inaugurated at a CHI 2017 SIG meeting that brought together 45+ Arab and non-Arab HCI researchers/practitioners who are conducting, or interested in, HCI within Arab communities. The goal of this workshop is to start dialogs that leverage our "insider" understanding of HCI research in the Arab context and assert our cultural identity in design, in order to explore challenges and opportunities for future research. In this workshop, we focus on one of the themes that drove our community discussions at most of the events held so far: the extent to which participatory approaches in the Arab context are culturally and methodologically challenged. Our goal is to bring together researchers/practitioners with success and failure stories of designing with Arab communities to discuss methods, share experiences, and pass on lessons learned. We plan to share the results of our discussions and our research agenda with the wider CHI community through different social and scholarly channels.
This workshop aims to develop an agenda within the CHI community to address the emergence of blockchain, or distributed ledger technologies (DLTs). As blockchains emerge as a general purpose technology, with applications well beyond cryptocurrencies, DLTs present exciting challenges and opportunities for developing new ways for people and things to transact, collaborate, organize and identify themselves. Requiring interdisciplinary skills and thinking, the field of HCI is well placed to contribute to the research and development of this technology. This workshop will build a community for human-centred researchers and practitioners to present studies, critiques, design-led work, and visions of blockchain applications.
The rise of ever more autonomy in vehicles and the expected introduction of self-driving cars have led to an HCI focus on human interactions with such systems in recent years. Automotive user interface researchers have been investigating issues such as control transition procedures, shared control, (over)trust, and overall user experience in automated vehicles. Now it is time to open the research field of automated driving to other CHI research fields, such as human-robot interaction (HRI), aeronautics and space, conversational agents, and smart devices. These communities have been dealing with the interplay between humans and automated systems for more than 30 years. In this workshop, we aim to provide a forum to discuss what can be learnt from other domains for the design of autonomous vehicles. Interaction design problems that occur across these domains, such as control transition procedures, how to build trust in the system, and ethics, will be discussed.
The built environment affects the behaviors and experience of its occupants. Recent technological advances have made it possible to simultaneously quantify features of indoor and outdoor built environments and people's reactions and behaviors within these environments. In a new research methodology, the "living lab", aspects of the environment are varied, and people's consequent reactions and behaviors to environmental changes are measured through a combination of objective automated sensing capabilities and behavioral techniques. This approach can be implemented within tightly controlled lab spaces or out in the real world, creating new opportunities to research human-environment interactions across a variety of environments and populations. In this workshop, we highlight the capabilities of the living lab methodology and share how insights from lab and real-world research can inform innovative applications to improve the health, performance, and well-being of building occupants.
Falling costs and the wider availability of computational components, platforms and ecosystems have enabled the expansion of maker movements and DIY cultures. This can be considered as a form of democratization of technology systems design, in alignment with the aims of Participatory Design approaches. However, this landscape is constantly evolving, and long-term implications for the HCI community are far from clear. The organizers of this one-day workshop invite participants to present their case studies, experiences and perspectives on the topic with the goal of increasing understanding within this area of research. The outcomes of the workshop will include the articulation of future research directions with the purpose of informing a research agenda, as well as the establishment of new collaborations and networks.
Following their introduction in the 1960s, head-mounted VR systems mainly focused on the visual and aural senses. To enhance immersion in the virtual world, researchers have since pursued the addition of movement and haptics through motion platforms, exoskeletons, and hand-held devices. From a proliferation of low-cost devices that can sense the user's motion to full-body motion capture suits, from gloves to gestures, natural interaction techniques have been desirable and explored in HCI and VR for several years. With virtual reality rapidly becoming accessible to mass audiences, there is growing interest in new forms of natural input techniques to enhance immersion and engagement in multiuser systems. First, we need to determine what types of techniques we can design that integrate well with multiuser experiences. Next, we need to understand the contribution of the designed techniques to the experience, understand how they work with existing controllers, and explore whether they should replace or augment current techniques in order to design more effective and engaging experiences. Finally, it is vital to discern the limitations and the types of application scenarios that are suitable for incorporating the techniques. The aim of this workshop is to deepen and expand the discussion on natural interaction techniques for collaborative VR within the CHI community and to promote their relevance and research in HCI.
The extraordinary advances in hardware and networking technology over the past 50 years have not been matched by equivalent advances in software. Today's interactive systems are fraught with limitations and incompatibilities: they lack interoperability and flexibility for end users. The goal of this workshop is to rethink interaction by identifying frameworks, principles and approaches that break these limitations and create true human-computer partnerships.
Hackathons, or hackathon-style events, are increasingly popular time-bounded, intensive events across different fields and sectors. Often-cited examples include demanding overnight competitive coding events, but many design variations exist for different audiences and with divergent aims. Hackathons offer a new form of collaboration by affording explicit, predictable, time-bounded spaces for interdependent work and for engaging with new audiences. This one-day workshop will bring together researchers, experienced event organizers, and practitioners to share and discuss their practical experiences. Empirical insights from studying these events may help position the CHI community to better study, plan and design hackathon-style events and socio-technical systems that support new modes of production and collaboration.
In HCI, researchers conduct studies in interdisciplinary projects involving massive volumes of data, artificial intelligence, and machine learning capabilities. Awareness of the responsibility this entails is emerging as a key concern for the HCI community.
This community will be impacted by the General Data Protection Regulation (GDPR) [5], which enters into force on 25 May 2018. From that date, every data controller and data processor will face increased legal obligations (in particular regarding accountability) under certain conditions.
The GDPR encourages the adoption of Soft Law mechanisms, approved by the national competent authority on data protection, to demonstrate compliance with the Regulation. Approved guidelines, codes of conduct, labeling, marks and seals dedicated to data protection, as well as certification mechanisms, are some of the options the GDPR proposes.
There may be discrepancies between the realities of HCI fieldwork and the formal process of obtaining Soft Law approval from the competent authorities dedicated to data protection. Given these issues, it is important for researchers to reflect, as a community, on legal and ethical encounters in HCI research.
This workshop will provide a forum for researchers to share experiences with the Soft Law mechanisms they have put in place to increase trust, transparency and accountability among stakeholders. These discussions will be used to develop a white paper on practical Soft Law mechanisms (certification, labeling, marks, seals...) emerging in HCI research, with the aim of demonstrating that the GDPR may be an opportunity for the HCI community.
We aim to bring together researchers to share their stories and discuss opportunities for improving research practice with Third Sector Organisations (TSOs) such as charities, NGOs, and other not-for-profit organisations. Through these discussions, we will develop a framework for good practice, providing guidance on conducting research with these organisations, their staff, and their beneficiaries through ethical methodologies and methods. We will do this by discussing three ways in which working with TSOs impacts the work we do: (1) the ways in which this kind of work impacts the third sector; (2) the ways in which it impacts the research itself; and (3) the ways in which it impacts us as researchers and people.
Voice User Interfaces are becoming ubiquitously available, providing unprecedented opportunities to advance our understanding of voice interaction in a burgeoning array of practices and settings. We invite participants to contribute work-in-progress in voice interaction, and to come together to reflect on related methodological matters, social uses, and design issues. This one-day workshop will be geared specifically to present and discuss methodologies for, and data emerging from, ongoing empirical studies of voice interfaces in use and connected emerging design insights. We seek to draw on participants' (alongside organisers') contributions to explore ways of operationalising findings from such studies for the purposes of design. As part of this, we will try to identify what can be done to improve user experience and consider creative approaches to how we might ameliorate challenges that are faced in the design of voice UIs.
HCI is a field where diversity should be considered in the systems we build and study. As such, it is important to cultivate a growing group of diverse researchers with a range of experiences to contribute to difficult design, research, and computational problems. Therefore, the CHIMe organizers invite graduate and undergraduate students to attend. CHIMe intends to provide a welcoming environment for mentoring and collaboration amongst peers, faculty, and industry experts in HCI.
This one-day workshop will help early career researchers/academics develop their careers in HCI through intensive interaction with senior mentors from academia and industry who are experienced in research and professional service. Application to the workshop is open to all members of the HCI community who have received their PhDs in the past five years.
The HCI Across Borders (HCIxB) community has been growing in recent years, thanks in particular to the Development Consortium at CHI 2016 and the HCIxB Symposium at CHI 2017. For CHI 2018, we plan to organize an HCIxB symposium that focuses on building the scholarship potential and quality of junior HCIxB researchers - paving new pathways, while also strengthening the ties between the more and less junior members of the community.
The World Health Organization predicts that by the year 2030, mental illnesses will be the leading disease burden globally. Advances in technology create opportunities for close collaboration between computation and mental health researchers. The intersection between ubiquitous computing and sensing, social media, data analytics and emerging technologies offers promising avenues for developing technologies to help those in mental distress. Yet for these to be useful and usable, human-centered design and evaluation will be essential. The third in our series of Symposia on Computing and Mental Health will provide an opportunity for researchers to come together under the auspices of CHI to discuss the design and evaluation of new mental health technologies. Our emphasis is on understanding users and how to increase engagement with these technologies in daily life.
This symposium showcases the latest work from Asia on interactive systems and user interfaces that address under-explored problems and demonstrate unique approaches. In addition to circulating ideas and sharing a vision of future research in human-computer interaction, this symposium aims to foster social networks among academics (researchers and students) and practitioners and create a fresh research community from the Asian region.
CHI can be a multisensory overload. Attendees endure days of workshops, presentations, evening parties, and ephemeral interactions. This paper attempts to disrupt that onslaught of activities [9]. It draws inspiration from theories and methods already in HCI - e.g. mindfulness [1], reflective design [8], and slow design [4, 7] - to bring eight pages of silence to the conference. This is meant to disrupt CHI's busy schedule and help attendees foster resilience. In pursuit of these aims, the authors will use the time and pages offered by this paper to facilitate a group silence; quiet, for just a moment, in the midst of the hurricane that is CHI.
The open-source model of software development is an established and widely used method that has been making inroads into several scientific disciplines which use software, thereby also helping much-needed efforts at replication of scientific results. However, our own discipline of HCI does not seem to follow this trend so far. We analyze the entire body of papers from CHI 2016 and CHI 2017 regarding open-source releases, and compare our results with the discipline of bioinformatics. Based on our comparison, we suggest future directions for publication practices in HCI in order to improve scientific rigor and replicability.
In this alt.chi paper, we reflect on #CHIversity, a grassroots campaign highlighting feminist issues related to diversity and inclusion at CHI 2017, and in HCI more widely. #CHIversity was operationalised through a number of activities, including collaborative cross-stitch and 'zine'-making events; the development of a 'Feminist CHI Programme'; and the use of the Twitter hashtag #CHIversity. These events granted insight into how diversity discourses are approached within the CHI community. From these observations, we provide examples of how diversity and inclusion can be promoted at future SIGCHI events. These include fostering connections between attendees, discussing 'polarizing' research in a conservative political climate, and encouraging contributions to the growing body of HCI literature addressing feminisms and related subjects. Finally, we suggest how these approaches and benefits can translate to HCI events beyond CHI, where exclusion may routinely go undetected.
Equating users' true needs and desires with behavioural measures of 'engagement' is problematic. However, good metrics of 'true preferences' are difficult to define, as cognitive biases make people's preferences change with context and exhibit inconsistencies over time. Yet, HCI research often glosses over the philosophical and theoretical depth of what it means to infer what users really want. In this paper, we present an alternative yet very real discussion of this issue, via a fictive dialogue between senior executives in a tech company aimed at helping people live the life they 'really' want to live. How will the designers settle on a metric for their product to optimise?
As data increases in dimensionality or complexity, it becomes difficult to graphically represent data items or series in a straightforward way. Chernoff faces encode data values as features of a human face, but afford only a handful of dimensions and can be difficult to decode. In this paper, we extend and improve Chernoff faces by merging them with the work of landscape painter Bob Ross, creating data-landscape glyphs that directly encode data as three series with arbitrary numbers of data items per series. This is pretty obviously a bad idea, yet it is difficult to precisely articulate why, given the current state of the art in academic visualization. We propose and evaluate this technique as a way of highlighting these gaps in our ontology of bad visualization ideas, with the goal of being able to dismiss future bad ideas right out of the gate.
Lickable Cities is a research project that responds to the recent and overwhelming abundance of non-calls for gustatory exploration of urban spaces. In this paper, we share experiences from nearly three years of nonrepresentational, absurdist, and impractical research. During that time, we licked hundreds of surfaces, infrastructures, and interfaces in cities around the world. We encountered many challenges in thinking with, designing for, and interfacing through taste, including: how can and should we grapple with contamination, and how might lickable interfaces influence more-than-humans? We discuss these challenges to compassionately question the existing framework for designing with taste in HCI.
Because publishing with the ACM is essentially required to advance our careers, we must examine its practices critically and constructively. To this end, we reflect on our experience working with the ACM student publication Crossroads. We encountered rigid content limitations related to sex and sexuality, preventing some contributors from foregrounding their connection to political activism, and others from publishing altogether. We explore the underlying institutional and sociopolitical problems and propose starting points for future action, including developing a transparent content approval policy and new organizations for politically-engaged computing researchers, all of which should center the leadership of marginalized individuals.
Fitness trackers promise a longer and better life for the people who engage with them. What is forgotten in their analysis for HCI, though, is how they re-conceptualise the very notion of what constitutes a 'step'. We discuss everyday edge cases illustrating how fitness trackers fail to address the goals and ideals of the people using them: they merely re-affirm the fitness of already fit people and can have an adversarial effect on others. We offer strategies for future designers to become aware of their own biases, along with implications that could lead to more non-normative and diverse tracker designs.
We present a design exercise prompted by reports of miniature phones being smuggled into prisons hidden in body cavities of inmates or visitors. For us, this provokes reflection on many aspects of technology which are routinely taken for granted, matters as varied as material form, how we anticipate our 'users' and their lives, the institutional contexts of technology, and how dramatically technologies can be re-appropriated. By generating design ideas for 'a better bumphone', we hope to speak not only to the specific condition of prisoners but also challenge work in HCI to redress its relative neglect of the prison context.
Current HCI research overlooks an opportunity to create human-machine interaction within the unique cognition ongoing during dreams and drowsiness. During sleep onset, a window of opportunity arises in the form of Hypnagogia, a semi-lucid sleep state where we begin dreaming before we fall fully unconscious. To access this state, we developed Dormio, the first interactive interface for sleep, designed for use across levels of consciousness. Here we present evidence for a first use case, directing dream content to augment human creativity. The system enables future HCI research into Hypnagogia, extending interactive technology across levels of consciousness.
Our cultural and scientific understandings of neural networks are built on a set of philosophical ideas which might turn out to be superstitions. Drawing on methodologies of defamiliarisation and performance art which have been adopted by HCI, we present an analog apparatus for the ritualistic performance of neural network algorithms. The apparatus draws on the interaction modes of the Ouija board to provide a system which involves the user in the computation. By recontextualising neural computation, the work creates critical distance with which to examine the philosophical and cultural assumptions embedded in our conception of AI.
Human-computer interaction (HCI) technologies are designed to have outcomes. Many unintentional effects, however, are beyond the scope, notice and accountability of current evaluation practices. Using a new minimalist theory of semiotics called finite semiotics, this paper frames these effects as cognitive externalities. It argues that the well-established economic theory around externalities should be adapted to regulate this aspect of HCI: both to remedy inefficiencies and inequalities, and to forestall further exploitation. To this end, the paper calls for recognition of cognitive property rights and for HCI to work towards evaluating any HCI technology as its complete delta in global cognition.
Human-Food Interaction (HFI) scholarship has discussed the possible roles of technology in contemporary food systems and proposed design solutions to various food problems. While acknowledging that there are food issues to be fixed, we propose that there is room for more experimental and playful HFI work beyond pragmatic problem-solving. We introduce a design research project, Parlour of Food Futures, that speculates on emerging food-tech practices through the 15th-century game of Tarot. Based on three live Parlour enactments, we discuss what contributions and challenges speculative design methods afford to HFI.
The story "In the Data Kitchen" appeared online in 2017, and went viral, receiving an astonishing degree of attention for an unattributed work with obscure origins. We review this provocative fiction, discussing its evident resonance with societal concerns and ongoing discussions of big-data ethics.
Almost all research output includes tables, diagrams, photographs and even sketches, and papers within HCI typically take advantage of including these figures. However, the space given to non-diagrammatic or tabular figures is often small, even in papers that primarily concern themselves with visual output. The reason for this might be the publishing models employed in most proceedings and journals: despite the move to a digital format, unhindered by page count or physical cost, there remains a somewhat arbitrary limitation on page count. Recent moves by ACM SIGCHI and others to remove references from the maximum page count suggest that there is movement on this; however, images remain firmly within the limits of the text. We propose that images should be celebrated - not penalised - and call not only for the adoption of the Pictorials format at CHI, but for images to be removed from page counts in order to encourage greater transparency of process in HCI research.
For decades, HCI scholars have studied technological systems and their relationship to particular contexts and user groups. Increasingly, this scholarship is dependent not only on localized contexts, but also the relationship of local contexts to the global stage, drawing on approaches such as ICT4D and cross-cultural design. In this paper, we examine authors' descriptions of study contexts, particularly country information, in paper titles and texts in the CHI Proceedings from 2013 to 2017. We found strikingly different patterns of titling between studies of Western and non-Western countries, including whether and how country names are mentioned in titles, and the precision when describing study contexts. Drawing on critical theories, we analyze how the politics of titling at CHI functions to build categories of "normal" and "exotic." We explicate the problems that the current ways of representation bring to knowledge production at CHI, and necessary paths to move forward.
In this panel, we discuss the challenges that are faced by HCI practitioners and researchers as they study how voice assistants (VAs) are used on a daily basis. Voice has become a widespread and commercially viable interaction mechanism with the introduction of VAs such as Amazon's Alexa, Apple's Siri, the Google Assistant, and Microsoft's Cortana. Despite their prevalence, the design of VAs and their embeddedness in other personal technologies and daily routines have yet to be studied in detail. Using a roundtable format, we will discuss these issues through a number of VA use scenarios that panel members will consider. Issues that researchers will discuss in this panel include: (1) obtaining VA data, and privacy concerns around the processing and storage of user data; (2) the personalization of VAs and the user value derived from this interaction; and (3) the relevant UX work that reflects on the design of VAs.
How do we determine what content and behavior in an online environment does not fit with community standards? What is "harassment" and how has the problem evolved over time? What automated and human-labor approaches are possible to manage inappropriate content? If platforms control what it is possible to say, what are the broader implications for the public sphere? In this panel discussion, we will highlight what we as a community have learned about these issues since this topic was presented as a panel at CHI twelve and twenty-four years ago.
This panel aims to create a space for participants at CHI 2018 to see how far we have come as a community in raising and addressing issues of gender, and how far we have yet to go. Our intent is for open discussion to support the community's intentions to move towards greater equity, inclusivity, and diversity.
In studying the increasing role that opaque, algorithmically-driven systems, such as social media feeds, play in society and people's everyday lives, user folk theories are emerging as one powerful lens with which to examine the relationship between user and algorithmic system. Folk theories allow researchers to better see from users' own perspectives how they understand these systems and how their understanding impacts their behavior. However, this approach is still new. Methods, interpretation, and future directions are up for debate. This panel will be an active discussion of the contribution of folk theories to HCI to date, how to advance a folk theory perspective, and how this perspective can bridge academic and industry study of these systems. Our panel gathers key folk theory HCI researchers from academia and industry to share their perspectives and engage the CHI audience.
An ongoing challenge within the HCI research community is the development of community norms for research ethics in the face of evolving technology and methods. Building upon a successful town hall meeting at CHI 2017, this panel will include members of the SIGCHI Research Ethics Committee, but will be structured to facilitate a roundtable discussion and to collect input about current challenges and processes. The panel will pose open questions and invite audience discussion of best practices focused on issues such as cultural and disciplinary differences in ethical norms. There will also be discussion of how ethical issues are handled in SIGCHI paper submission and reviews, processes for how we might create and disseminate ethics resources, and regulatory issues such as the recent IRB changes for U.S. institutions.
Numerous reports and studies point to increasing performance criteria and workplace stress for academics and researchers. Together with the audience, this panel will explore how we experience this in the HCI community, focussing particularly on what we can do to change it in favour of a slower, more sustainable academic culture. The future of good-quality HCI research depends on happy, healthy researchers and reasonable, realistic academic processes.
There is sustained and growing interest in human-robot teaming across academia and industry. Many critical questions remain as to how to foster flexible, effective teaming that allows humans and robots to work closely together. This panel will bring together experts on human-robot interaction (HRI) from academia and industry to discuss and debate those critical challenges. Panelists will engage the audience in a structured discussion of where current research meets industry demands and of the philosophical-to-technical challenges facing the successful integration of human-robot teaming.
The ACM SIGCHI community has been at the forefront of addressing issues of equity and inclusivity in the design and use of technology, accounting for various aspects of users' identities such as gender, ethnicity, and sexuality. With this panel, we wish to explore how we, as SIGCHI, might better target similar goals of equity and inclusivity - across intersections - within our own community. We wish to create a forum for recognizing best practices regarding equity and inclusivity in participants' local and global contexts that we might feasibly integrate across SIGCHI. By equally prioritizing the voices of those in the audience and on the panel, we intend to foster a lively and constructive discussion that will help us chart a way forward. The takeaways from this panel will be articulated in an article for Interactions magazine, targeting the larger human-computer interaction (HCI) community.
Increasingly, public spaces offer virtual reality (VR) games for entertainment (e.g., VR amusement parks). Such VR games should therefore be attractive not only for players but also for bystanders. Current VR systems still focus primarily on enhancing the experience of head-mounted display (HMD) users; thus, bystanders without an HMD cannot enjoy the experience to the same extent as HMD users. We propose "ReverseCAVE", a step towards a shareable VR experience [1]. This is a proof-of-concept prototype for public VR visualization using CAVE-based projection with translucent screens for bystanders. The screens surround the HMD user, and the VR environment is projected onto them. Bystanders can see the HMD user and the VR environment simultaneously, and capture photographs to share with others. Thus, ReverseCAVE can considerably enhance bystanders' public VR experience and expand the utility of VR.
This video presents BioFidget, a biofeedback system that integrates physiological sensing and display into a smart fidget spinner for respiration training. We present a simple yet novel hardware design that transforms a fidget spinner into 1) a non-intrusive heart rate variability (HRV) sensor, 2) an electromechanical respiration sensor, and 3) an information display. The combination of these features enables users to engage in respiration training through designed tangible and embodied interactions, without requiring them to wear additional physiological sensors. An empirical user study with 32 participants in a practical setting shows that the respiration training method reduces stress and that the proposed system meets the requirements of sensing validity and engagement.
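The abstract does not detail how HRV is computed from the spinner's pulse signal; a common time-domain HRV measure is RMSSD over inter-beat (RR) intervals. A minimal sketch, assuming RR intervals in milliseconds have already been extracted from the sensor:

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between RR intervals,
    a standard time-domain heart rate variability (HRV) measure."""
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return np.sqrt(np.mean(diffs ** 2))

# Illustrative RR intervals (ms); real values would come from the sensor.
rr = [812, 845, 790, 860, 825, 870, 805]
print(f"RMSSD: {rmssd(rr):.1f} ms")  # higher values indicate more variability
```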
The video describes Puffy, a friendly inflatable social robot companion for children, developed in cooperation with a team of psychologists and educators. Puffy has a combination of features that makes it unique among existing robots.
In the Internet of Things, not only do people interact with objects, but objects also interact with one another, creating more complex material landscapes [2]. However, many thing-to-thing interactions are currently designed within a simple and linear paradigm: using mobile devices to remotely control connected objects. With these videos, we aim to explore more complex interrelationships, using a thing-centered design perspective in a speculative way. To this end, we position things within entanglements: fluid assemblages formed and dissolved in a situated way. This shows new possibilities for everyday objects to co-perform tasks and 'carry' information that can be interpreted further as new perspectives are gained through interaction. Additionally, as entanglement implies a complicated relationship, we aim to provoke reflection on the conflicts that may arise between things, and the implications of them (mis)interpreting our behavior.
With filmmaking codes and techniques, interaction design, and sound design at its core, Cinematic Prototyping is a blend of design fiction and interaction prototyping that enables the exploration and development of new product interactions made possible by future technologies, without being restricted by the limitations of current prototyping platforms such as Arduino. As such, it lets designers concentrate fully on the exact interplay between product and user within a specific context. As emerging technologies are researched and developed in consortia of industrial, academic and societal stakeholders, cinematic prototypes can offer shared spaces for dialogue in which goals can be aligned and critical issues discussed. The video presents an example of a cinematic prototype showing how a smart jacket could support veterans suffering from PTSD during negative outbursts.
In this video, we showcase "Printed Paper Actuator", a low-cost, reversible, electrically driven actuation and sensing method. We developed the actuator by printing a single layer of conductive Polylactide (PLA) on a piece of copy paper with a desktop fused deposition modeling (FDM) 3D printer. Though simple to fabricate, the actuator supports electronic control: touch sensing, finger-slide sensing, and bending-angle detection and control. We also introduce a software tool that assists with design, simulation and printing toolpath generation. The video showcases multiple groups of actuator primitives that serve as transformation references, and various applications from functional modular robots to interactive home environments.
A new wave of collaborative robots designed to work alongside humans is bringing the automation historically seen in large-scale industrial settings to new, diverse contexts. However, the ability to program these machines often requires years of training, making them inaccessible or impractical for many. This project rethinks what robot programming interfaces could be in order to make them accessible and intuitive for adult novice programmers. We created a block-based interface for programming a one-armed industrial robot and conducted a study with 67 adult novices comparing it to two programming approaches in widespread use in industry. The results show that participants using the block-based interface implemented robot programs faster, with no loss in accuracy, while reporting higher scores for usability, learnability, and overall satisfaction. The contribution of this work is showing the potential for using block-based programming to make powerful technologies accessible to a wider audience.
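The paper's actual block language is not reproduced here, but the core idea, expanding high-level blocks into low-level arm commands, can be sketched as follows. All names (MoveTo, Gripper, the command strings) are hypothetical, not the study's interface:

```python
from dataclasses import dataclass

@dataclass
class MoveTo:
    x: float
    y: float
    z: float

@dataclass
class Gripper:
    closed: bool

def run(program, send_command):
    """Interpret a block program by expanding each high-level block
    into a low-level command string for the robot arm."""
    for block in program:
        if isinstance(block, MoveTo):
            send_command(f"movej({block.x}, {block.y}, {block.z})")
        elif isinstance(block, Gripper):
            send_command("grip_close()" if block.closed else "grip_open()")

# A pick-and-place program a novice might assemble from four blocks.
pick_and_place = [MoveTo(0.3, 0.1, 0.2), Gripper(True),
                  MoveTo(0.5, -0.1, 0.2), Gripper(False)]
run(pick_and_place, print)  # print stands in for sending to the robot
```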
We present CanalSense+, a system that recognizes jaw, face, and head movements (face-related movements) and identifies users. It recognizes face-related movements using barometers embedded in earphones. We found that face-related movements change the air pressure inside the ear canals in ways characteristic of the type and degree of the movement, and that these characteristic changes can be used to recognize the movements. In an experiment, per-user recognition accuracy was 87.6% for eleven face-related movements. We also found individual differences in the air-pressure changes, and based on this finding we examined the possibility of user identification/authentication: CanalSense+ can identify 12 users with an accuracy of 90.6%.
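The abstract does not specify the feature set or classifier; as one illustration, per-window statistics over the in-ear pressure signal could feed a standard classifier. A minimal, hypothetical sketch:

```python
import numpy as np

def pressure_features(window):
    """Illustrative per-window features over in-ear barometric samples
    (hPa); the system's actual features may differ."""
    w = np.asarray(window, dtype=float)
    return [
        w.mean(),                                 # baseline pressure
        w.std(),                                  # overall variability
        np.ptp(w),                                # peak-to-peak amplitude
        np.polyfit(np.arange(len(w)), w, 1)[0],   # linear trend (slope)
    ]

# e.g. a jaw-opening movement produces a characteristic pressure rise
window = [1013.2, 1013.4, 1013.9, 1014.6, 1014.8, 1014.3]
print(pressure_features(window))
```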
This video was produced as part of the participatory speculative design project "Sankofa City." The project engages community residents to design emerging technologies (e.g. augmented reality, ubicomp, and self-driving cars) tied to their cultural practices. This collaboration is based in the African-American neighborhood of Leimert Park, in South Los Angeles. With increasing urban development in the area, residents are concerned about cultural displacement.
In this twelve-week workshop series, local residents worked with university students (Figure 1) to design and plan their neighborhood's future. Groups began by brainstorming high-level concepts through "what if" hypothetical questions. Then they created prototypes and scenarios based on their user personas, which were inspired by local residents. Lastly, they created seven collages (Figure 2) and a design fiction video to share at a local stakeholders' meeting, in order to gather broader community feedback. The video portrays an AR-enabled self-driving shuttle that dynamically displays local history. The video was particularly effective for gaining community buy-in and has since been shown at a number of local events and festivals. By grounding speculative designs in a community context, students and residents imagined novel technologies that support Leimert Park's local economy and project its cultural heritage into the unknown future.
Food fabrication offers a new dimension to home cooking. We see the design challenge not in automating cooking tasks, but in augmenting the tangible experience, skill building and enjoyment of baking. As an example of such augmentation, we present a novel concept for designing and fabricating roll cakes with custom cross-section graphics.
Roll cakes are made by rolling a flat piece of cake into a spiral. In our application, users draw the cross section image of their cake and we calculate a printable template as a guide to color the unrolled cake. Users mix colored batter and use a custom 3D printed nozzle to arrange the batter on the template.
After baking the batter in the oven, users simply roll it into a roll cake and, magically, their design appears on the cross section.
We identified and solved the key difficulties in producing such graphics, such as maintaining a consistent layer thickness and aligning the graphics in the spiral. With Rolling Graphics, we expand the potential of food printing with custom graphics and, potentially, custom tastes.
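The template calculation itself is not given in the abstract; the essential geometry can be illustrated by modelling the rolled cake as an Archimedean spiral and converting each spiral angle into an arc length, i.e. the position along the unrolled template where that part of the design must be colored. A sketch with illustrative dimensions:

```python
import numpy as np

def _arc_length(phi, b):
    """Arc length of the spiral r = b * phi, measured from phi = 0."""
    return (b / 2.0) * (phi * np.sqrt(1.0 + phi**2) + np.arcsinh(phi))

def unrolled_position(theta, inner_radius=0.005, layer_thickness=0.010):
    """Distance along the flat (unrolled) cake for spiral angle theta,
    modelling the roll as r = inner_radius + b * theta with
    b = layer_thickness / (2 * pi). Dimensions in metres, illustrative."""
    b = layer_thickness / (2.0 * np.pi)
    phi0 = inner_radius / b   # shift so the spiral has the form r = b * phi
    return _arc_length(phi0 + theta, b) - _arc_length(phi0, b)

# One full inner turn consumes roughly 2 * pi * mean_radius of flat cake.
print(f"{unrolled_position(2 * np.pi):.3f} m")
```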
We introduce an Augmented Reality (AR) assistant that reacts to your voice, music and sound. The user can draw and place stick figures in the environment that will dance to the music and respond to specific commands. The stick figure characters can also augment a Google Home by taking its place on top of the speaker and moving in response to the rhythm of the music. They will also change their accessories and clothing depending on the genre of the music, play instruments or change their dancing style. We explore how voice UIs may benefit from a visual component that reinforces a mental model for otherwise structureless interactions. The lack of a visual representation of voice assistants can slow their adoption and learning curve. Our system can help users discover capabilities of a physically tethered device that would otherwise be imperceptible. We present the first system that embodies a voice-controlled speaker using an AR smart avatar.
We present the "Guts Game", a novel two-player mobile game involving ingestible devices. Our game requires the players to swallow a digital sensor that measures the user's body temperature continuously. Players need to change their body temperature to a certain degree to complete ingame tasks. Points are awarded to players upon completing these tasks. The game ends when one of the players excretes the sensor. The player who received more points at the end of the game wins. By introducing ingestible devices to the field of game design, we might be able to facilitate entertainment experiences for people who need to use ingestible devices for medical use. Furthermore, our work might also help game designers interested in developing novel and rich game experiences.
Hybrid systems between biology and computation for studying living organisms have demonstrated potential in promoting children's science experience and a better understanding of their actions on the environment. However, these systems offer limited interactions between the user and the biological subject, caused by inflexible equipment and missing possibilities to interfere with the biological subject through an interface. We present GrowKit, a digital/physical construction kit for living organisms that enables children to personalize their own experiments in biology and provides young learners with a playful STEM experience of designing, making, and conducting experiments. Our findings suggest that the comprehensive scaffolding offered by storytelling cards, experimental building blocks and remote lab software allows young learners to explore a broad range of biological ideas and conduct personally meaningful experiments, and promotes engagement and curiosity in children.
In our personal spaces, we are increasingly surrounded by interactive, connected and engaging "things" that demand attention and convey a sense of continuous pace. This work showcases how things could be designed from a different perspective: seemingly aware, but intentionally non-engaging. IdleBot is a very furry robotic puppet that is waiting. Unlike many applications in social robotics, IdleBot has neither clear purpose nor explicit functionality - it merely exists and waits. The subtleness of its interaction, consisting of mostly idle motions, is the starting point to investigate forms of interaction bordering on non-interaction, situated in a personal context. In two iterations, we designed a fully working interactive prototype that embodies different modes of waiting. The design of the waiting behaviors is based on a prior observational study with 20 participants, whose waiting behavior was recorded for one minute each under the false pretense of having to wait for a "real" experiment to start. A Kinect device tracks people in close proximity and allows IdleBot to glance at them serendipitously. The video shows what happened when we released IdleBot into the wild.
We present a design exploration on how water-based droplets in our everyday environment can become interactive elements. For this exploration, we use electrowetting-on-dielectric (EWOD) technology as the underlying mechanism to precisely control the motion of droplets. EWOD technology provides a means to precisely transport, merge, mix and split water-based droplets, and has been widely explored for automating biological experiments in industrial and research settings. More recently, it has been explored for DIY biology applications. In our exploration, we integrate EWOD devices into a range of everyday objects and scenarios to show how programmable water droplets can be used as information displays and as an interaction medium for painting and personal communication.
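As background, EWOD moves a droplet by energizing the electrode adjacent to it, which pulls the droplet onto that pad; repeating this steps the droplet along a path. A minimal, hypothetical routing sketch (the grid cells and energize function are illustrative, not from the paper):

```python
def route(droplet, path, energize):
    """Step a droplet along a path of electrode grid cells by energizing
    each neighbouring electrode in turn; the droplet is drawn onto the
    energized pad at every step."""
    for cell in path:
        energize(cell)   # apply voltage to the target electrode
        droplet = cell   # droplet has moved onto the energized cell
    return droplet

start = (0, 0)
final = route(start, [(0, 1), (1, 1), (2, 1)],
              lambda cell: print("energize", cell))
print("droplet now at", final)
```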
Body in Flow is an interactive experience, presented as a Virtual Reality (VR) art piece: the presentation of a world in flow and the viewer as a body in flow. The core purpose of this piece is to explore the concepts of viewer identity and relationship with the environment. There is both an environmental message and an inquiry into the nature of virtual identity. The piece underscores the concept that we are profoundly connected with our environment: we have an identity and capabilities to move and impact the environment, and the environment has a flow of its own and impact over us. I seek to formulate an identity for the player, to play with the boundaries of identity and its loss, and to explore the boundaries between player and environment.
The Volca project is a speculative photography research project that is in part a proposal, in part a sensory experiment, and a continual work in progress. Volca is an experimental camera apparatus for recording what Flusser would term 'technical images'. Its design deliberately opens the discussion of what it is to be a photographer, collaborator or even facilitator of a complex apparatus with a discrete system at its core - a system that extends beyond the physical body of the device and reaches into the heart of our human culture of creation, consumption, reflection, storytelling and the active shaping of our identity and self-image. The project has several separate aspects dealing with hardware, software, interaction design, and media and photographic theory. This paper and the body of work it describes is the first collection of portraits and street photography produced with this new apparatus. The collection is arranged as a virtual reality photographic essay, or artist's photo-book. The making, distribution, display and consumption of these new speculative photographs is a vital part of the investigation of what can be considered the system of photography - a system in which camera and photographer are only a small part of the philosophical whole.
Constructs:Conducts is an immersive interactive work for Montreal's Satosphere Dome. The audience is immersed in a colour and sound space. The work assigns each element to a single track of a multi-track recording device, with up to 96 different tone colours assignable per colour image. By manipulating the fader controls of a mixing desk, each sound can be blended to create one overall colour tone of evolving complexity. Each visualised colour hue is assigned one overall tonal mass, with minute shifts in timbral values and locative displacements. The work is symbolic of the artists' respective fields of enquiry into colour for the Post-digital Futures.
Radical Choreographic Object is a choreographic performance, on a variable scale, that highlights the irreducibility of the bodies and their capacities to generate contexts, expressions and relations, not in order to measure, to predict or preempt, but in order to develop a social ecology of play, risks of unpredictability, the emergence of the unexpected, appropriation and creativity. We propose a version of RCO in virtual reality where the dancers evolve in a blackbox and the viewer has agency in focus and perception.
This paper describes an interactive installation called 'Intermodulator' that generates diverse patterns of moiré image through participants' collaborative and improvisational sound engagement. This installation is comprised of ordinary box fans, backlights, and microphones. The custom-designed system enables the speed of the individual fans and the brightness of the backlights to interact with different live sound inputs. When a certain resonance and tension are achieved between these input sounds, the installation produces seemingly stationary or moiré images of the fans. The main contribution of this installation is to suggest an experiential space where participants can (a) produce artistic audio-visual performances through collaborative improvisation, and (b) empirically explore the key features of collaborative improvisation that promote creativity and learning. This paper introduces the concept, background, and technical details of the project, and proposes its site-specific version for Satosphere Dome at the SAT.
"Weight of Data" is a series of the light art installations that has started with a question, 'What value our digital and non-digital commodities represent for us in data?' In order to make a sense of the value of data-driven virtual commodity into artworks, the artist decided to measure the value of a cryptocurrency into the weight of physical, edible commodities - rice, sugar and salt. Each artwork of this series is composed with 1 Kg of each agricultural raw product with embedded electronic parts. These selected commodities are valuable as being essential for human survivals and they used to function as a currency traded in history. Each artwork displays transformative light patterns in vivid colors through an algorithm that calculates the value of single cryptocurrency into the weight of each commodity value. It refreshes every 2 minutes in real-time along with the numeric ticker on screen that represents for the weight increase or decrease in correlation with the value of cryptocurrency against each commodity. With the embedded layers of cultural and historical context of these commodities, the artist asks an open question on the current issues of data by contrasting tangible ancient currency against virtual new currency.
We propose a dome experience where two voice performers create an interactive audiovisual environment in real-time using speech-to-text and gaming engine technologies. The performance is supported by technology developed as a Virtual Reality application ("I Am Afraid"), modified to take full advantage of the Satosphere. The performers add words and sounds to the space, which are then remixed and blended with effects based on the interaction.
Music listening has changed greatly with the emergence of music streaming services such as Spotify or YouTube. But has it inspired us to make new experimental music? Live Coding YouTube is a response to the anticipation of novel performance practices using streaming media. A live coder uses any available video from YouTube, a video streaming service, as source material to perform an improvised audiovisual piece. The challenge is to manipulate the emerging media streamed from a networked service given the limited functionality of the API provided. The piece finds parallels in early experimental music that manipulated magnetic tape and vinyl records; in contrast, however, the audiovisual space that a musician can explore on the fly is practically infinite. The performance system is built entirely in a web browser and is publicly available at the following address: https://livecodingyoutube.github.io/
Respire is a virtual environment presented on a head-mounted display with generative sound, built upon our previous work Pulse Breath Water. The system follows changes in the user's breathing patterns and generates corresponding changes in the audio and the virtual environment. The piece is built upon mindfulness-based design principles, with the breath as the primary object of the user's attention, and employs various approaches to augmenting breathing in the virtual environment.
Helmetron invites the spectator on an immersive sensory journey through digital pictures transformed into light and sound: a journey not into the pictures' graphics but into their raw data. From the first to the last byte, files are transformed into light and sound stimuli. The images and sensations generated are proper to each spectator's experience and situate the actual artwork within their own perceptive system. Helmetron is made of an LED visor, headphones, and a microcomputer running a program that translates raw data into light and sound. The helmet produces an experience one might think of as a glitchy version of Brion Gysin's Dream Machine. By diving into the heart of the machine, we feel its own data flow and experience visual illusions induced by the files transformed into sensory stimuli. Helmetron breaks through the barrier of conventional virtual reality interfaces and allows for an organic link between the participant and the computer's digital processes.
Aura Garden is a Virtual Reality environment where participants create their own light sculptures using a physical wand and customizable tools, either alone or with another person in a networked virtual environment, applying various visual and temporal effects. In the Aura Garden, participants can experience light sculptures from different perspectives, static or animated, and with different surface effects. Each sculpture remains in the space and becomes part of the Aura Garden unless a participant decides to remove it. The real journey starts when a participant navigates around the space and discovers other participants' creations. We use a consumer virtual reality product, the HTC Vive: a head-mounted display, a controller, and a motion tracker.
REVIVE explores the affordances of live interaction between the artificial musical agent MASOM, human electronic musicians, and visual generation agents. The Musical Agent based on Self-Organizing Maps (MASOM) has memorized sound objects and learned how to structure them temporally by listening to large corpora of human-made music. MASOM is then able to improvise live with the other (human) performers, imitating the style of the music it is reminded of. For each musician, a corresponding visual agent translates their sound and musical decisions into images, allowing the audience to see who does what. This reveals the musical gestures that are so often lost in electronic music performance. For CHI, MASOM plays with two live performers in a 20-minute audiovisual REVIVE experience.
Geometrical Hong Kong is an interactive virtual reality (VR) documentary that provides the audience an immersive touring experience of the lesser-known but visually stunning and highly functional architectural design of Hong Kong public housing estates. In a first-person view, the audience can navigate a series of buildings by changing positions, view the interiors and exteriors, and listen to brief audio introductions to the architecture. In the design and production of this piece, we formed an indie VR production model aimed at crafting an immersive and interactive experience at an affordable cost. To realize this goal, we chose a simple interaction model and consumer technologies, but emphasized quality cinematography and script writing.
Comprehensive analysis of online conversations is important for effective user engagement in customer care. However, conventional approaches are not effective in analyzing customer care conversations frequently and thoroughly. In this work, we introduce computational tone-based metrics derived from online conversations for customer care managers to quantify customer satisfaction, customer concerns and agent performance. We present a computational approach that seamlessly incorporates domain-specific tone analysis with product features to enable multi-faceted conversation analysis. These computational results are integrated with interactive visualizations for visual aggregation, summarization and explanation of user engagement in online customer care. We demonstrate the usefulness and effectiveness of our approach through user studies with customer care managers in an enterprise context.
Sequence recommender systems assist people in making decisions, such as which product to purchase and what places to visit on vacation. Despite their ubiquity, most sequence recommender systems are black boxes and do not offer justifications for their recommendations or provide user controls for steering the algorithm. In this paper, we design and develop an interactive sequence recommender system (SeRIES) prototype that uses visualizations to explain and justify the recommendations and provides controls so that users may personalize the recommendations. We conducted a user study comparing SeRIES to a black-box system with 12 participants using real visitor trajectory data in Melbourne and show that SeRIES users are more informed about how the recommendations are generated, more confident in following the recommendations, and more engaged in the decision process.
We propose a design space for security indicators for behavioural biometrics on mobile touchscreen devices. Design dimensions are derived from a focus group with experts and a literature review. The space supports the design of indicators which aim to facilitate users' decision making, awareness and understanding, as well as increase transparency of behavioural biometrics systems. We conclude with a set of example designs and discuss further extensions, future research questions and study ideas.
This paper describes an efficient model for detecting negative mind states caused by visual analytics tasks. We developed a method for collecting data from multiple sensors, including GSR and eye-tracking, and for quickly generating labelled training data for the machine learning model. Using this method, we created a dataset from 28 participants carrying out intentionally difficult visualization tasks. We conclude by discussing the best-performing model, Random Forest, and its future applications in providing just-in-time assistance for visual analytics.
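As a sketch of the modelling step: once windowed GSR and eye-tracking features are extracted and labelled, a Random Forest can be trained and cross-validated with scikit-learn. The feature names and synthetic data below are illustrative, not the paper's dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in for per-window features, e.g.
# [gsr_mean, gsr_peak_count, mean_fixation_ms, pupil_dilation]
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, size=200)  # 1 = negative mind state, 0 = neutral

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(f"mean CV accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2f}")
```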
Modern technologies enable data analysis in scenarios where keyboard and mouse are not available, and research on multimodality in visual analytics is facing this challenge. However, existing approaches consider exclusively static environments with large screens. We therefore envision Valletto, a prototypical tablet app that allows the user to generate and specify visualizations through a speech-based conversational interface, multitouch gestures, and a conventional GUI. We conducted an initial expert evaluation to gain insight into the mapping between modalities and functions and into the integration of different modalities. Our aim is to discuss design and interaction considerations for a mobile context that fits users' daily lives.
Event sequence data is generated across nearly every domain, from social network activities and online clickstreams to electronic health records and student academic activities. Patterns in event sequences can provide valuable insights to assist people in making important decisions, such as business strategies, medical treatments, and career plans. EventAction is a prescriptive analytics tool designed to present and explain recommendations of event sequences. It provides a visual and interactive approach to identifying similar records, exploring potential outcomes, reviewing a recommended action plan that might help achieve the user's goals, and interactively defining a personalized action plan. This paper presents the first application of EventAction in the digital marketing domain. Our direct contributions are: (1) a report on two case studies that evaluate the effectiveness of EventAction in helping marketers prescribe personalized marketing interventions, and (2) a discussion of four major challenges, and our solutions, in analyzing customer records and planning marketing interventions.
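The abstract does not state EventAction's similarity function; as one plausible illustration, similar customer records could be retrieved by a sequence-similarity ratio over their event histories. A hypothetical sketch:

```python
from difflib import SequenceMatcher

def similarity(seq_a, seq_b):
    """Similarity ratio in [0, 1] between two event sequences."""
    return SequenceMatcher(None, seq_a, seq_b).ratio()

# Hypothetical customer event histories.
records = {
    "cust_1": ["visit", "email_open", "purchase"],
    "cust_2": ["visit", "email_open", "churn"],
    "cust_3": ["ad_click", "visit", "purchase"],
}
target = ["visit", "email_open"]
ranked = sorted(records, key=lambda k: similarity(target, records[k]),
                reverse=True)
print(ranked)  # most similar records first
```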
Virtual Reality (VR) has often been discussed as a promising medium for immersive data visualization and exploration. However, few studies have evaluated users' open-ended exploration of multi-dimensional datasets using VR and compared the results with that of traditional (2D) visualizations. Using a workload- and insight-based evaluation methodology, we conducted a user study to perform such a comparison. We find that there is no overall task-workload difference between traditional visualizations and visualizations in VR, but there are differences in the accuracy and depth of insights that users gain. Our results also suggest that users feel more satisfied and successful when using VR data exploration tools, thus demonstrating the potential of VR as an engaging medium for visual data analytics.
We report on an initial examination of the potential of immersive unit visualizations in virtual reality, showing how these visualizations can help viewers examine data at multiple scales and support affective, personal experiences with data. We outline unique opportunities for unit visualizations in virtual reality, including support for (1) dynamic scale transitions, (2) immersive exploration, and (3) anthropomorphic interactions. We then demonstrate a prototype system and discuss the potential for virtual reality visualization to support personal interactions with data.
Designing interactive visualizations is challenging, especially in complex, open-ended contexts. Although numerous interaction techniques have been developed, there is a lack of holistic theoretical guidance in the visualization literature on designing interactivity, where interactivity refers to the overall quality of interaction between a user and a visualization tool. In this work, we aim to contribute to a deeper understanding of the design and evaluation of interactivity by exploring user perceptions in an exploratory context. We report the findings of a lab study and reflect on its potential implications for visualization designers and researchers.
Today's data visualization tools offer few capabilities and no representational standards for conveying uncertainty. Our aim is to remedy this by creating a visual vocabulary for uncertainty in data. However, we must first develop an extensible methodology for validating the effectiveness of uncertainty visualization techniques. In this paper we describe a test instrument we have developed to collect empirical data concerning four measures - accuracy, response time, reported confidence, and cognitive load - that can be used to evaluate techniques for visualizing data with uncertainty.
Comprehension of visual content is linked with the visitor's experience in cultural heritage contexts. Given the cognitive diversity of visitors, in this paper we investigate whether cognitive characteristics are associated with the comprehension of cultural heritage visual content. We conducted a small-scale eye-tracking study in which people with different visual working memory capacities participated in a gallery tour and were then assessed on exhibit comprehension. The analysis revealed that people with low visual working memory had difficulty comprehending the content of the gallery paintings. Accordingly, we propose a cognition-centered cultural heritage framework aimed at providing personalized experiences to visitors and helping them improve their content comprehension.
Emerging Augmented Reality (AR) technologies can enable situated interactive visual analytics beyond the screen. However, the presentation and interaction design of data visualization integrated into the physical environment may vary across scales. Understanding how users manage their spatial relationships with AR visualizations at different representational scales is crucial for designing user-friendly AR-empowered visual analytics systems. To this end, we present a study with 16 participants, inviting them to solve two logical reasoning puzzles by interacting with the associated node-link graphs in AR at room and table scales, respectively. Through observation, interviews, and video analysis, we identify three types of spatial arrangements: positioning the visualization in the figural, vista, or panoramic space of the user. We further explore how scale and visualization design affect users' spatial preferences and exploratory behaviors, and summarize our findings across the three types of spatial arrangements.
Spatial ability is a category of human reasoning skills that plays an important role in affecting a person's development in science, technology, engineering and mathematics. Spatial ability has been demonstrated to be malleable and can be improved through training. In this paper, we present a training scheme by tangible interaction with a reconfigurable robot called EasySRRobot. A preliminary user study based on behavioral and EEG data analysis shows that via interaction with EasySRRobot, users can significantly improve their performance on a task related to spatial ability.
Public displays are becoming more and more ubiquitous. Current public displays are mainly used as general information displays or to display advertisements. How personal content should be shown is still an important research topic. In this paper, we present PD Notify, a system that mirrors a user's pending smartphone notifications on nearby public displays. Notifications are an essential part of current smartphones and inform users about various events, such as new messages, pending updates, personalized news, and upcoming appointments. PD Notify implements privacy settings to control what is shown on the public displays. We conducted an in-situ study in a semi-public work environment for three weeks with seven participants. The results of this first deployment show that displaying personal content on public displays is not only feasible but also valued by users. Participants quickly settled on privacy settings that worked for all kinds of content. While they liked the system, they did not want to spend time configuring it.
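As an illustration of the kind of filtering a PD Notify-style system needs, the following is a minimal sketch that maps a notification to what a public display may show under an assumed set of privacy levels; the level names and rules are hypothetical, not PD Notify's actual settings:

# Hypothetical privacy levels for mirroring notifications to a public display.
LEVELS = ("hide", "app_only", "sender_only", "full")

def mirrored_text(notification, level):
    """Reduce a smartphone notification to what a public display may show."""
    app, sender, body = notification  # e.g. ("Mail", "Alice", "Lunch at 12?")
    if level == "hide":
        return None  # suppress entirely
    if level == "app_only":
        return f"New {app} notification"
    if level == "sender_only":
        return f"{app}: message from {sender}"
    return f"{app}: {sender}: {body}"  # "full"

print(mirrored_text(("Mail", "Alice", "Lunch at 12?"), "sender_only"))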
Engaging in an active life is important for older adults to maintain and promote well-being. In this study, we discuss how older adults who are aging in place actively promote their well-being through the lens of interacting and socializing with others. We conducted an interview study with 10 participants to understand how they develop and maintain interpersonal relationships with others. We found that older adults took charge of creating, building, developing, and maintaining a safety net, which is reflected in the following four representations: pre-determined network, chosen network, interest-based network, and shared-identity network. We also identified three important characteristics of the safety net: co-producing, reciprocating, and strengthening identity, which influence older adults' personal and social experiences.
On-demand video services allow viewers to access media wherever and whenever they like, on a wide variety of devices. These services have become extremely popular in recent years, but exactly how people interact with these services has not been studied in detail. We conducted a diary study with nine households to investigate this, and present the preliminary results in this paper. Participants took advantage of the freedom and choice these services provided, watching on different devices, in different locations, and for extended periods of time. However, the majority of viewing conformed to traditional patterns, occurring in the evening on large screens, though viewing on a laptop was slightly more popular than the television. We found that usage of on-demand services was influenced by situational factors such as location and the devices that are available.
While motivational messages and monetary incentives are both useful intervention strategies for promoting physical activity, it remains unclear how these strategies work together to increase people's daily steps, and whether the effect continues after the intervention period. Here, we investigate how these two interventions affect how adult office workers with sedentary jobs set and achieve their daily step goals. We found that motivational messages can improve people's walking behavior only when their behavior is first influenced by the messages rather than by money. Although monetary incentives increase people's commitment to goals, they lead those with low self-efficacy to set easier goals. Monetary incentives may thus adversely affect the walking behavior of people who have low self-efficacy. These findings help shed some light on how to design persuasive mechanisms for mobile health and fitness applications.
As teen stress and its negative consequences are on the rise, several studies have attempted to tend to teens' emotional needs through conversational agents (CAs). However, these attempts have focused on increasing the human-like traits of agents, thereby overlooking the possible advantages of machine-inherent traits, such as a lack of emotion or the ability to perform calculations. This paper therefore aims to shed light on the machine-inherent traits of CAs that can help satisfy the emotional needs of teenagers. We conducted a workshop with 20 teenagers, followed by in-depth interviews with six of the participants. We discovered that teenagers expected CAs to (1) be good listeners due to their lack of emotion, (2) keep their secrets by being separated from the human world, and (3) give them advice based on the analysis of sufficient data. Based on our findings, we offer three design guidelines for building CAs.
How do fundraisers effectively use emotions in pro-social crowdfunding platforms? Pro-social crowdfunding platforms allow for online campaigns to raise money, and these fundraisers rely more heavily on emotions to attract backers because there are no financial incentives to contribute to these campaigns. People give to pro-social crowdfunding campaigns out of empathy and sympathy for the campaign beneficiaries. Using prior research on empathy, we considered the emotional drivers of empathy and examined how campaigns convey those emotions using campaign pictures and language. We analyzed the facial expressions in campaign pictures and the emotional words in the campaign language, and we found that emotions shape fundraising success. Furthermore, our study shows that the same emotion - happiness/joy - leads to different fundraising success based on whether it is conveyed visually or textually. A future experiment will more explicitly connect these emotions to empathy in potential backers.
Despite HCI's interest in topics around sexuality, pornography has remained underexplored. Specifically, the user experience (UX) of technology-mediated pornography has received little attention, even though it has been argued that it may contribute to a better understanding of pleasure and enjoyability of interactive systems. We surveyed 187 participants about a positive experience involving technology-mediated pornography. Autonomy and competence, as well as sexual arousal and desire emerged as the most characteristic dimensions of positive experiences with pornographic content. However, preliminary qualitative analysis suggests that many participants also experienced mixed feelings due to technology. Overall, our findings provide first insights into the complexity of the user experience of technology-mediated pornography.
Emotion regulation in the wild (ER-in-the-wild) is an important grand challenge problem of increasing focus, and is hard to approach effectively with point solutions. We provide HCI researchers and designers thinking about ER-in-the-wild with an ER-in-the-wild system architecture derived from mHealth, the Emotion Regulation Process Model (PM), and a circular biofeedback model that can be used when designing an ER system. Our work is based on literature reviews of and collaborations with experts from the domains of wearables, emotion regulation, haptics and biofeedback (WEHAB) as well as systems. In addition to providing a generic model for ER-in-the-wild, the system architecture presented in this paper explains different kinds of emotion regulatory interventions and their characteristics.
Previous studies have shown that when individuals join groups for lunch, they tend to conform to the decision of the group. As a result, people do not always have the chance to pick the food they wish for, which in turn may have negative consequences, such as not adhering to healthy diets. To address this problem, we created Lunchocracy, an anonymous decision support tool for lunch spots in a workplace, based on feedback from a focus group with 7 participants. The tool implements a conversational Skype bot, Lunchbot, that allows users to express interest in joining lunch and to vote for diners to eat at. We deployed the tool for four weeks with 14 participants from the same university department. Post-deployment interviews with 5 participants revealed overall satisfaction with Lunchocracy, in particular because it structured the lunch decision-making and saved time. We discuss how the use of Lunchocracy can positively influence a group's eating dynamics.
Smartphone "addiction" concerns have increased steadily over the past decade. Mainstream media perpetuates these fears, often building on scholarly research in an extreme and dramatized style (e.g., comparing smartphones to heroin and cocaine, claiming that smartphones have destroyed a generation, etc.). We review how the relationship between scholarly research and media outlets engender the idea that smartphones are a danger and perpetuate the view that addiction is a widespread phenomenon. We further explore the origins of 'addiction' measures for technology use and argue that such measures are not sufficient in assessing clinical pathology. We end with preliminary findings from an experimental and interview study of smartphones with parents and teens and explore the role of "addiction" narratives for individual interpretations of smartphone use.
The potential of things or objects generating and processing data about the day-to-day activities of their users has given a new level of popularity to the Internet of Things (IoT) among consumers. Even though this popularity has increased steadily, uptake of IoT devices has been slow and abandonment rapid. To build on the existing literature and advance our understanding of the sociological processes of use and non-use of these devices, this paper presents results from a survey of 489 IoT users. Our qualitative analysis of open-ended questions revealed that the motives for use include the multi-functionality of devices that provide control over daily activities, a social competitive edge, economic advantage, and habit. The justifications for limiting or stopping use include privacy concerns, information overload and inaccuracy, demotivation because of reminders about pending or failed goals, a lack of excitement after satisfying initial curiosity, and maintenance becoming unmanageable in terms of effort, time, and money.
Data entry often involves looking up information from email. Task switching to email can be disruptive, and people can get distracted and forget to return to their primary task. In this paper, we investigate whether giving people feedback on how long they are away from their task has any effect on the duration and number of their switches. An online experiment was conducted in which participants had to enter numeric codes into an online spreadsheet. They had to look up these codes in an email sent to their personal email address upon starting the experiment. People who were shown how long they were away for made shorter switches, were faster to complete the task and made fewer data entry errors. This suggests feedback on switching duration may make people more aware of their switching behaviour, and assist users in maintaining focus on their main task.
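A minimal sketch of the feedback mechanism described above, assuming the application can observe focus changes on the primary task window; the class structure and feedback message are illustrative choices, not the study's implementation:

import time

class SwitchTracker:
    """Track how long a user is away from the primary task window."""
    def __init__(self):
        self.left_at = None
        self.switches = []

    def on_focus_lost(self):
        self.left_at = time.monotonic()

    def on_focus_gained(self):
        if self.left_at is not None:
            away = time.monotonic() - self.left_at
            self.switches.append(away)
            self.left_at = None
            # Feedback analogous to the study manipulation:
            print(f"You were away for {away:.0f} s "
                  f"({len(self.switches)} switches so far).")

tracker = SwitchTracker()
tracker.on_focus_lost(); time.sleep(0.2); tracker.on_focus_gained()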
Having a step-by-step list of instructions for completing a task - a plan - enables people to make progress on challenging tasks, but making plans for tasks is a tedious job. Asking crowdworkers to make plans for others' tasks only works for independent (context-free) tasks, and asking people who have context (e.g., friends or collaborators) has social costs and quality concerns. Our goal is to reduce the costs and improve quality of planning by people who have context in the context-rich domain of writing. We introduce a vocabulary (a finite set of functions pertaining to writing tasks) to aid the planning process. We develop a writing vocabulary by analyzing 264 comments, and compare plans created using this vocabulary to those created without any aid, in a study with 768 comments (N=145). We show that using a vocabulary reduces the planning time and effort compared to unstructured planning, and opens the door for automation and task sharing for complex tasks.
Individuals and organizations increasingly use online platforms to broadcast difficult problems to crowds. According to the "wisdom of the crowd," because crowds are so large, they are able to bring together many diverse experts, effectively pool distributed knowledge, and thus solve challenging problems. In this study we test whether crowds of increasing size, from 4 to 32 members, perform better on a classic psychology problem that requires pooling distributed facts.
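To make the fact-pooling intuition concrete, here is a small simulation sketch, under the strong assumption that each crowd member independently knows each fact with a fixed probability; the parameters are illustrative, not the study's design:

import random

def crowd_covers_all(n_members, n_facts=10, p_know=0.3, trials=10_000):
    """Estimate P(crowd collectively knows every fact) for a given crowd size."""
    hits = 0
    for _ in range(trials):
        pooled = set()
        for _ in range(n_members):
            pooled |= {f for f in range(n_facts) if random.random() < p_know}
        hits += pooled == set(range(n_facts))
    return hits / trials

# Larger crowds pool distributed facts far more reliably:
for n in (4, 8, 16, 32):
    print(n, round(crowd_covers_all(n), 3))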
Labeling images is essential for enabling the search and organization of digital media. This is true both for "factual", objective tags such as time, place, and people, and for subjective labels, such as the emotion a picture generates. Indeed, the ability to associate emotions with images is one of the key functionalities most image analysis services strive to provide today. In this paper we study how emotion labels for images can be crowdsourced, and uncover limitations of the approach commonly used to gather training data today, that of harvesting images and tags from social media.
Incorporating user-generated video (UGV) into professional coverage of public events has the potential to enhance experience through offering alternative perspectives. However, current tools for choosing video rely on objective technical quality metrics that might not identify these offerings. This work uses an Open Profiling of Quality methodology, in which participants freely describe and refine a vocabulary of positive and negative qualities of music festival video. By validation with a second viewer cohort, we show that this method can yield attributes with strong descriptive consensus. Our work should help enhance creative processes for selecting footage that transcend conventional criteria.
The Internet of Things (IoT) is attracting vendors such as Google, Homey, and Samsung, which have already brought to market a plethora of devices and services supporting smart home automation. However, recent studies have shown that end-users, who have little knowledge of the features and possibilities of IoT devices, face difficulties in conjuring up meaningful use scenarios that combine such devices. Therefore, they fail to anticipate useful configurations beyond those provided by vendors and hence miss out on the vast potential of the IoT. We present an ongoing investigation that explores the potential of sourcing IoT-relevant scenarios from a popular microtask-crowdsourcing platform, and a preliminary evaluation of such scenarios with respect to their originality and practicality. This work paves the way for the automated use of crowdsourced user scenarios to support IoT end-users.
It can be hard for authors to know if what they write will be clear to their readers. While collaborators can provide expert feedback, their limited time and attention makes it costly for authors to continuously solicit detailed input from them. Via a study with ten graduate student authors, we find a clear need for more feedback. Our crowd-based approaches provide an outsider perspective that is timely and detailed, supplementing expert feedback.
We introduce Mass-Computer Interaction (MCI) as a natural evolution of Crowd-Computer Interaction (CCI) fostered by recent technical innovations and advances in large-scale sensing, processing, and interactive systems. MCI represents a sensible combination of (1) a very large number of end-users, usually on the order of hundreds or thousands, (2) very large physical settings, such as theaters and auditoriums, and (3) large-scale infrastructure, including distributed systems. We outline design challenges posed by the new Mass-Computer Interaction paradigm, elaborate on its defining characteristics, and provide a general-purpose model for MCI applications. These contributions are exemplified with SKEMMI, our general-purpose platform specifically designed for developing and deploying Mass-Computer Interaction applications.
A major goal of collaborative ideation is improving the creativity of the ideas generated. Recent approaches enhance creativity by showing users similar ideas during productive ideation and diverse ideas when they reach an impasse. However, related work either demands higher mental effort from users to assess similarity or yields only a limited number of similarity values. Furthermore, idea relationships are only considered in one dimension (similarity). In our research in progress, we introduce a new approach called concept validation. It enables us to (1) capture the conceptualization of users' ideas and (2) assess multi-dimensional relationships between all ideas in near real-time. We conducted a study with 90 participants to validate the suitability of our approach. The results indicate that adding the extraction of semantic concepts to the ideation process has no negative impact on the number and creativity of ideas generated. This signifies an important step towards our vision of an idea-based knowledge graph used by an interactive system to improve computer-supported human creativity.
For global organizations, bringing individuals with diverse backgrounds together stimulates idea generation and extends the existing knowledge base, which benefits group productivity. However, native speakers (NS) may dominate cross-lingual teamwork that communicates solely in a common language, which may reduce diversity and harm the productivity of the whole group. As a solution, we consider it necessary to support non-native speakers' (NNS) contributions to equalize NSs' and NNSs' participation. In this study, we examine how recent language support technologies might affect the group dynamics of collocated multilingual teams consisting of multiple NSs and NNSs. In our experiment, quads of two NSs and two NNSs were provided with a language support tool that integrates machine translation, automatic speech recognition, and a shared display to enable NNSs to participate in multilingual teamwork without requiring them to use a common language. Automatic language detection allowed them to dynamically decide which language to use in their group discussions. We found that using the tool increased the variety of languages used (e.g., Chinese, English, and Japanese) and had the potential to equalize NSs' and NNSs' participation.
Social reminders, in which other people alert us to do a task, are effective in promoting intended behaviors. To improve self-motivation for completing a task and produce effects similar to social reminders, we propose "pseudo social reminders", a personal task management system where users can set social reminders by themselves for their own social tasks. The goal of this research is to investigate how pseudo social reminders affect task management and how users perceive these message notifications. We built an Android app that uses notifications showing a sender's name and photo to increase awareness of what users need to do, and conducted a one-week field study. The results showed that when receiving a message from a group member related to the task, participants felt social influence. Most participants selected a reminding group that was related to the task and could remember the task with greater impact through photos as visual aids.
Physicians and nurses in intensive care units and operating rooms are responsible for several patients at the same time. However, monitoring multiple patients can be challenging, for example, because staff are moving and vital signs may not be accessible. Therefore, physicians and nurses may not be able to create a full picture of their patients' status, their so-called situation awareness. The hands-free operability and portability of a head-mounted display could allow physicians and nurses to monitor vital signs constantly and independently of location. We developed an application that displays vital signs of multiple patients on a Vuzix M300 head-mounted display. In this work, we describe the user-centered design approach, implementation, and future evaluation of the application in the operating room at the University Hospital Würzburg.
Current at-home cardiopulmonary resuscitation (CPR) training systems are limited by the feedback they provide. Virtual trainers have the potential to enhance feedback in CPR training systems by providing real-time demonstrations. We developed CPRBuddy, a CPR training system that uses a virtual trainer emulating a live trainer to automatically provide powerful feedback. CPRBuddy was evaluated in a user study in which 9 participants performed CPR compressions both with and without feedback from CPRBuddy. We found that CPRBuddy's demonstrative feedback has the potential to improve the immediate performance of CPR, e.g., compression depth, frequency, and recoil. This work contributes towards the design of avatars for training.
Medical images and navigation systems support physicians during needle-based interventions. As the information is primarily displayed on monitors, the physician's attention is drawn away from the patient's body. To address this issue, we explore the additional use of a vibration wristband that directs the movements for needle-based operations via different vibration patterns on the operator's arm. We conducted a first user study comparing the combination of tactile and visual guidance versus visual-only feedback with 12 participants to investigate the general feasibility. Our results show that task times, usability scores, cognitive load, and accuracy are comparable for both conditions suggesting that vibration feedback is generally suitable for medical navigation tasks and warranting further iteration and research in this direction.
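As a rough sketch of how such guidance might map to wristband output, the following encodes a hypothetical correction-to-vibration-pattern table; the specific patterns are assumptions, not the study's actual encoding:

# Hypothetical mapping from a required needle correction to a vibration
# pattern on the operator's wristband: (number_of_pulses, pulse_length_ms).
PATTERNS = {
    "advance":    (1, 100),  # one short pulse: keep moving forward
    "retract":    (1, 400),  # one long pulse: pull back
    "tilt_left":  (2, 100),  # two short pulses
    "tilt_right": (3, 100),  # three short pulses
}

def cue_for(correction):
    """Expand a correction into a list of vibration commands."""
    pulses, length_ms = PATTERNS[correction]
    return [("vibrate", length_ms) for _ in range(pulses)]

print(cue_for("tilt_left"))  # [('vibrate', 100), ('vibrate', 100)]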
We describe the challenges of deploying a digital checklist for medical emergencies during an in-the-wild design and evaluation study. The in-the-wild approach allowed for many design iterations to meet the requirements of a safety-critical setting, while also providing lessons for designing in the wild. We faced two major challenges: working with research coordinators as study mediators and adapting training strategies to busy user schedules. We discuss these challenges and approaches to addressing them.
HCI research shows that cardiac patients dislike the passive role imposed by current home monitoring technology. In this paper, we explored how cardiac patients reacted to taking on the more active role of being a diagnostic agent. We developed and implemented a technology probe for these patients to report symptoms and other health metrics to health providers daily, and we studied their interaction with the probe over eight weeks. Our preliminary findings unfold three themes: patient reflection or obsession; patient roles and responsibility towards healthcare staff; and opportunities for nurses to use reports at the hospital in the process of collaborative interpretation. We add to earlier studies by focusing on daily, patient-initiated reporting and present topics for further studies.
It can be difficult to take time to reflect on healthy habits and goals. For those living with HIV, it is particularly important to have the opportunity to understand when and how their bodies are reacting to certain medications. In this work, we explore how a medication adherence application could help adults with HIV to reflect on their medication tracking behaviors in a way that promotes adherence. We present qualitative data collected through an early design stage activity by means of the Asynchronous Remote Communities (ARC) method, and a survey measuring attitudes toward a prototype application. We discuss two design implications for medication adherence applications: enabling users to record qualitative data that gives context to adherence data, and providing more visual support for reflection on daily medication behaviors.
In this work, we explored the feasibility and accuracy of detecting motor impairment in Parkinson's disease (PD) by implicitly sensing and analyzing users' everyday interactions with their smartphones. In a study with 42 subjects, our approach achieved an overall accuracy of 88.1% (90.0%/86.4% sensitivity/specificity) in discriminating PD subjects from age-matched healthy controls. The performance was comparable to the alternating-finger-tapping (AFT) test, a well-established PD motor test in clinical settings. We believe that the implicit and transparent nature of our approach can enable and inspire rich design opportunities for ubiquitous, objective, and convenient systems for PD diagnosis as well as post-diagnosis monitoring.
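As a quick plausibility check on the reported numbers, and assuming the age-matched groups were of roughly equal size, overall accuracy is approximately the mean of sensitivity and specificity:

sensitivity = 0.900  # correctly identified PD subjects
specificity = 0.864  # correctly identified healthy controls

# With roughly equally sized groups, overall accuracy is their mean:
accuracy = (sensitivity + specificity) / 2
print(f"{accuracy:.1%}")  # 88.2%, consistent with the reported 88.1%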
Motion-based games for Health (MGH) are increasingly being adopted for stroke rehabilitation because of their inherent benefits [14]. In this paper, we present the initial results of a novel intervention study in the Pakistani context that compares adaptive exergames with prescribed manual therapy in terms of motor functional improvements. Twenty-two participants took part in a four-week randomized clinical trial with pre- and post-intervention standard functional assessments (TUG, WMFT, ARAT). The results indicated that adaptive exergames are not just an alternative to manual therapy but can also provide the additional benefits of enjoyment and motivation (TAM, CEGEQ, and informal interviews).
Non-pharmacological interventions are the most common, and arguably most effective, interventions for people with dementia. Some of these approaches have been shown to benefit from the use of biographical or personalized materials. Such content is not always easy to obtain, and it is also a challenge to maintain awareness of what is meaningful for a given person. Faced with an absence of tools to collect and manage biographical materials, we created a web platform that supports the work of psychologists, streamlining the collection of relevant information about people with dementia. This knowledge is then used as a starting point for reminiscence and other biographical cognitive stimulation practices. In this paper, we present the design of our platform and results from a two-week case study with one psychologist and three patients, which showed improvements in the collection of meaningful data about a person and in maintaining awareness of the therapy as a whole.
This paper describes the design and development of Jazzy, a Virtual Reality (VR) application for users with Amblyopia. Jazzy has been designed in collaboration with a target end user of the system from its early stages. Jazzy exploits visual layers to display different contents for each eye on the Head Mounted Display (HMD) screens. In this way, the system becomes a controllable tool to stimulate eyes individually. In addition, taking advantage of the HMD associated controllers, the system is able to track the user in the physical space, enhancing the perceived realism during the VR experience. Users can train hand-eye coordination skills in a more lifelike and engaging manner. Jazzy also provides an interface for caregivers empowering them with a new support tool that can be used alongside classic therapeutic artifacts. In this paper we describe the eye-specific parametric visual stimuli and the caregiver's interface to tune them remotely for plug and play activities that can be experienced at home.
Pregnancy loss is a common complication in pregnancy. Yet those who experience it can find it challenging to disclose this loss and feelings associated with it, and to seek support for psychological and physical recovery. We describe our process for interleaving interviews, theoretical development, speculative design, and prototyping Not Alone to explore the design space for online disclosures and support seeking in the pregnancy loss context. Interviews with 27 women who had experienced pregnancy loss resulted in theoretical concepts such as "network-level reciprocal disclosure" (NLRD). In this paper, we focus on how interview findings informed the design of the Not Alone prototype, a mobile application aimed at enabling disclosure and social support exchange among those with pregnancy loss experience. The Not Alone prototype embodies concepts that facilitate NLRD: perceptions of homophily, anonymity levels, and self-disclosure by talking about one's experience and engaging with others' disclosures. In future work, we will use Not Alone as a technology probe for exploring and evaluating NLRD as a design principle.
Fatal overdoses are a common symptom of the opioid epidemic, which has been devastating communities throughout the USA for decades. Philadelphia has been particularly impacted, with a drug overdose death rate of 46.8 per 100,000 individuals, far surpassing other large cities' rates. Despite city and community efforts, this rate continues to increase, indicating the need for new, more effective approaches aimed at mitigating and combating this issue. Through a human-centered design process, we investigated motivators and barriers to participation in a smartphone-based system that mobilizes community members to administer emergency care for individuals experiencing an overdose. We discuss evidence of the system's feasibility, and how it would benefit from integration with existing community-based efforts.
In this paper, we address designing for the delivery of timely cues to initiate skill-building exercises for improving emotional and cognitive control. We focus on adults with ADHD, as they frequently experience difficulties related to such control. We describe the design and current user experience evaluation of TimeOut, a skill-building assistive technology to be used by adults with ADHD to improve long-term mastery of self-regulatory abilities. TimeOut, in its current iteration, consists of a wristband monitoring physiological signals, visualizations of these signals, an algorithm to prompt interventions, and the delivery of a skill-building exercise on a mobile phone.
Home assistants such as Amazon's Echo and Google's Home have become a common household item. In this paper we investigate whether, and what, consumers have reported online (in the form of reviews) related to privacy and security after purchasing or using these devices. We use natural language processing to first identify privacy- and security-related reviews, and then to investigate the topics consumers discuss within them; we were interested in understanding consumers' major concerns. Issues and concerns related to security and privacy have been reported within reviews; however, these topics account for only 2% of the total reviews given for these devices. Three major concerns were highlighted in our findings: data collection and scope, "creepy" device behavior, and violations of personal privacy thresholds.
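A minimal sketch of a plausible first-pass filter for such a pipeline, using a hand-picked seed-term list; the terms and the matching rule are assumptions, not the authors' actual NLP method:

SEED_TERMS = ("privacy", "security", "spying", "listening", "wiretap",
              "data collection", "creepy", "surveillance")

def is_privacy_security_review(text):
    """Flag a review for the privacy/security subset (first-pass filter)."""
    text = text.lower()
    return any(term in text for term in SEED_TERMS)

reviews = ["Love the speaker quality!",
           "It feels like it is always listening. Creepy."]
flagged = [r for r in reviews if is_privacy_security_review(r)]
print(len(flagged) / len(reviews))  # fraction flagged, cf. the ~2% reported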
Technologically-mediated learning environments are becoming increasingly popular, however remote learning still lacks many of the important interpersonal features that are leveraged in effective co-located learning. Recent work has started to build in non-verbal cues to support remote collaboration, such as showing pairs where their partner is looking on the screen. This method of displaying gaze visualizations has been shown to support coordination and learning in remote collaborative tasks. However, we have yet to explore how this technique scales to support multiple students with one teacher in a technology-mediated learning environment. In this study, we design and evaluate a system for displaying real time gaze information from multiple students to a single teacher's display during a computer science studio session. Our results suggest that multiple gaze visualizations can improve the teaching experience in remote settings. Further, we provide design recommendations for future systems based on our preliminary results.
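A minimal sketch of how per-student gaze samples might be smoothed into one marker each for the teacher's display; the smoothing window and data layout are illustrative assumptions, not the study's system:

# Hypothetical aggregation of per-student gaze samples (normalized 0..1
# screen coordinates) into per-student markers for the teacher's display.
from collections import deque

class GazeOverlay:
    def __init__(self, smoothing=5):
        self.samples = {}  # student_id -> recent (x, y) samples
        self.smoothing = smoothing

    def update(self, student_id, x, y):
        buf = self.samples.setdefault(student_id, deque(maxlen=self.smoothing))
        buf.append((x, y))

    def markers(self):
        """One smoothed marker per student, to be drawn on the display."""
        return {sid: (sum(p[0] for p in buf) / len(buf),
                      sum(p[1] for p in buf) / len(buf))
                for sid, buf in self.samples.items()}

overlay = GazeOverlay()
overlay.update("s1", 0.31, 0.42); overlay.update("s1", 0.35, 0.40)
print(overlay.markers())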
The overarching goals of this study are to better understand how monitoring of writing processes can be integrated deeply into writing practices, using writing revision analytics and visualizations that may ultimately impact writing self-efficacy, and to examine the effects of a writing revision tool on users' writing awareness during individual and collaborative writing. We developed Itero, a writing revision history analytics application that allows users to observe their writing behavior via visualizations and descriptive statistics. In this paper, we report findings from two pilot studies: one focused on user experiences during an individual writing task and the other during a collaborative writing task. Findings show evidence that Itero may increase users' writing self-awareness and understanding of their collaboration structure, warranting further investigation.
We present the initial design and evaluation of Kalgan, an interactive video player tailored to support casual language learning. Unlike traditional, more structured learning, casual language learning takes place alongside existing leisure activities such as watching videos. To better support learning during casual video viewing, we introduce several new features, including subtitle-aware rewind, interactive subtitle translation, and word lookup history that help people quickly access language content and recover missed information without disrupting their viewing experience. We evaluated this initial version of Kalgan using a mix of remote studies and in-person observation. The results of the evaluation highlight our participants' enthusiasm for subtitle-centric learning aids. They also suggest a variety of future research opportunities for casual language learning tools.
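To illustrate what subtitle-aware rewind might look like, here is a small sketch that jumps to the start of the current subtitle, or the previous one if playback has only just passed a boundary; the one-second heuristic is a guess, not Kalgan's actual behavior:

import bisect

def subtitle_rewind(position_s, starts_s):
    """Rewind to the start of the current subtitle, or to the previous
    one if playback is within the first second of the current line."""
    i = bisect.bisect_right(starts_s, position_s) - 1
    if i < 0:
        return 0.0
    if position_s - starts_s[i] < 1.0 and i > 0:
        return starts_s[i - 1]
    return starts_s[i]

starts = [0.0, 4.2, 9.8, 15.1]        # subtitle start times in seconds
print(subtitle_rewind(11.0, starts))  # 9.8: replay the current line
print(subtitle_rewind(10.0, starts))  # 4.2: just started, go one line back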
Students increasingly have access to information that can be posted by anyone without being vetted, making it vital to support students in evaluating claims through debate and critical thinking. To address this issue, we design and evaluate a lightweight but effective protocol for supporting debate in a classroom activity with university students. We evaluated participants' beliefs on controversial topics (e.g., homeopathy) before and after they briefly learned about critical thinking tools, posted arguments, and critically evaluated the arguments of peers. The findings suggest that this intervention led to a statistically significant belief change, and that this change was in the direction of the position best supported by evidence. Consequently, this work in progress presents a constructive approach to scaffolding debates in the classroom and beyond.
We envision a novel framework for observation tools in Montessori classrooms and other learning environments to facilitate informed interaction between teachers, caregivers, and children. An ongoing project is described as a case study in Montessori-philosophy-inspired sensing: designing unobtrusive sensor networks to understand and reflect on a child's learning progress by instrumenting existing Montessori learning materials using distributed sensing techniques.
The Block Talks toolkit combines the educational potential of tangible computing and augmented reality (AR) technologies to help children learn English sentence construction. Although examples of tangible AR reading systems for children currently exist, few focus specifically on learning sentence structure. Block Talks was developed using ordinary teaching supplies including letter tiles and blocks that can be manipulated to form words and sentences. A companion app allows children to scan these sentences to receive audio and AR feedback. Block Talks takes advantage of colour cues to draw children's attention to sentence structure patterns. This paper outlines existing tangible and AR systems for literacy learning, details the Block Talks design rationale, and concludes with a discussion of the advantages of using a combined tangible and AR approach for teaching sentence construction.
Mastering pragmatic competence, the ability to use language in a contextually appropriate way, is one of the most challenging parts of foreign language learning. Despite its importance, existing language learning systems often focus on linguistic components such as grammar, vocabulary, or pronunciation. Consequently, foreign language learners may produce grammatically flawless speech that is contextually inappropriate. With the diverse socio-cultural contexts captured in real-life settings, videos at scale can serve as great material for acquiring pragmatic competence. We introduce Exprgram, a web-based video learning interface that assists learners in mastering pragmatic competence. In Exprgram, learners can raise their context-awareness, practice generating alternative expressions, and learn diverse alternative expressions for a given context. Our user study with 12 advanced English learners demonstrates the potential of our learnersourcing approach to collect descriptive context annotations and diverse alternative expressions.
Children's emotional skills are important for their success. However, children with Autism Spectrum Disorders have difficulties in understanding social contexts and in recognizing and expressing facial expressions. In this paper, we present the design of EmoStory, a game-based interactive narrative system that supports children's emotional development. The system uses animation and emotional sounds to teach children six basic emotions and facial expressions in various social contexts, and also provides multi-level games for children to systematically practice the learnt skills. By using a facial expression recognition technique and animated visual cues for important facial movement features, the system helps children practice facial expressions and provides them with explicit guidance during the tasks.
To support the acquisition of drawing skills, this research explores a learnersourcing approach to generating personalized learning points. These are annotations containing a clip of a drawing process, a description, and an explanation. This paper presents ShareSketch, a web-based drawing system that enables learners to practice drawing, review the drawing process, and share their works with others. In particular, we propose a before/after-practice reflection workflow that allows learners to generate learning points before or after each short practice. We evaluated this reflection workflow with eight self-motivated drawing learners. The results showed that the workflow can guide learners to generate high-level subgoal or concept labels, low-level steps, and personalized coping strategies.
We present PARTICIPATE, a technology probe exploring how to strengthen the connection between activities taking place at public libraries and their collections, both in the digital realm and in the physical space. Based on ethnographic studies and participatory design activities, we derive three core implications for place- and activity-centric library services. These implications led us to design PARTICIPATE in collaboration with library staff from three European countries. The probe is a means to investigate how place- and activity-centric digital services in the library space can engage participants in co-creating knowledge, and enable libraries to integrate activities with library collections.
Deciding on what to see in a large museum can be overwhelming. We present FieldGuide, a smartwatch based system designed to facilitate museum gallery exploration. We discuss our design and implementation, followed by an evaluation conducted with twelve visitors in a natural history museum. Our findings describe how smartwatches can fit into a multi-display museum environment and strike a balance between personal and public interactions.
Whilst there is increasing work investigating the role of digital augmentation at outdoor cultural heritage sites, such augmentations have largely focused on visual and auditory modalities. We present initial findings from a field study of 29 visitors to a Finnish outdoor recreational island who used an exploratory cultural heritage app, augmented with a set of proximity-triggered multimodal boxes, to present multimodal and multisensory content (ranging from smell and audio to physical interactions). We outline how this enhanced visitors' experiences, as well as practical issues in the use of the boxes, and the future development of our work.
Playing exercise games together can be more enjoyable and motivating than playing alone. There are a variety of reasons why the experience of social interactions can motivate both participation in exercise and engagement in videogames. However, current game player pairing recommendation systems are unsatisfactory for facilitating social connectedness. Based on personality and social psychology theories, this work explores the possibility of matching players by personality to increase enjoyment and social interaction in exergames. Maintaining high levels of enjoyment and positive social interaction is important because both can promote retention and continuation of gameplay and exercise adherence. Early results of a pilot study suggest that player pairs who score high on openness and extraversion particularly enjoyed their game experience together.
The larger project behind this work investigates whether users' basic psychological needs within Self-Determination Theory are satisfied by game design elements over time, and whether this influences user retention. We also investigate whether these needs are satisfied in a specific order. As preliminary work, 47 participants filled out a survey on their use of gamified fitness and tracking apps. They were asked about their enjoyment of use, duration of use, and satisfaction of their basic needs. Results showed that users' need for competence seemed to be satisfied in the starting phase, which was also what kept them using the apps. However, they would eventually quit for reasons unrelated to the apps. Directions for the upcoming diary study are outlined.
One fundamental way players relate to games is through the avatar that represents them. In this work, we explore the impact of real-time adjustment of avatar appearance (visual body weight) on player motivation in exergames. Our results draw a mixed picture: while qualitative findings show that players were aware of the adaptations, quantitative results do not demonstrate significant differences in player motivation. Instead, observations suggest that players leveraged the mechanic to develop their own meta-game, an effect that needs to be considered when designing mechanics that aim to improve player motivation in exergaming settings.
Total Knee Replacement (TKR) is becoming a prevalent procedure for the treatment of knee osteoarthritis among older adults worldwide. A key to the success of TKR is effective post-surgical rehabilitation. However, for TKR patients, pain is a major factor that hinders them from fully engaging in rehabilitation exercises and leads to an unsatisfactory functional recovery rate. To help TKR patients cope with pain and adhere more closely to rehabilitation exercises, we designed a gamified rehabilitation tool, Fun-Knee, for effective pain distraction based on the "Peak-End Effect" theory. We have developed the alpha version of Fun-Knee and conducted an acceptability and usability test of its hardware and software. The results from a focus group show general acceptance of Fun-Knee among end users.
Food Literacy (FL) is associated with improved autonomy and confidence around food, healthier dietary intake, and chronic disease prevention. However, to date, behaviour change research at CHI has focused on motivating healthy eating mainly through weight loss and calorie control, which can lead to poor nutritional choices as consumers optimize caloric intake over a balanced diet. To address this gap, we designed a mobile game called Pirate Bri's Grocery Adventure, which seeks to improve FL through a situated learning approach to grocery shopping. Our game leverages Self-Determination Theory (SDT) to build a player's competence, autonomy, and relatedness as shoppers are encouraged to develop an understanding of the nutritional benefits of foods and are rewarded for balancing sugar, sodium, fats, and fibre in their purchases.
The general promise of employing the motivational power of games for serious purposes, such as performing physiotherapy exercises, is well established. However, game user research discusses both the approach of gamification, i.e., adding game elements onto a task-focused application, and that of serious games, i.e., injecting task-focused elements into a more fully-fledged game. There is a surprising lack of empirical work contrasting the two approaches. We present both a casually gamified application and a serious game with purpose-driven mechanics that provide different frontends to the same underlying digital health application. This application aims to support physiotherapy sessions for chronic lower-back afflictions. Results from an explorative pre-study contrasting both approaches indicate a clear preference for the serious game version, which captured higher perceived motivational components (autonomy and relatedness), as well as higher immersion and flow relative to the gamified version.
We assess the effects of colour-blindness on casual computer gaming performance and experience. Participants played an online puzzle game in both an unmodified environment and in a colour-blind simulation. Some of our results suggest that the colour-blind filter did not negatively impact player performance. However, users perceived the game as more difficult and their performance as limited. These contradicting results highlight the importance of further understanding the impact of colour vision deficiency on performance, and the accessibility principles used in game design when colour coding operable cues.
Gamification design is a complex process. Existing gameful design methods generally focus on high level motivational considerations. In order to provide designers with the tools to create meaningful and motivating game elements, we propose a design space that encapsulates lower-level design decisions, such as visual and operational aspects, during the design process. We also propose a set of design cards and a board that aim to support the design process for collaborative design sessions.
Little effort has been devoted to studying the emotional experience of designers over time. Using sentiment analysis, we explore a unique corpus of designers' written reflections on 15 different design processes. We investigate how positive and negative sentiment in the reflections change over the course of a three-month design project. Our findings indicate that change in sentiment is not attributable to time alone, but rather to different phases and methods employed by the design teams. Finally, we discuss implications and future avenues for both our results and for using sentiment analysis in HCI research.
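A minimal sketch of this kind of analysis using a lexicon-based scorer (NLTK's VADER); the tool choice and example reflections are ours, not necessarily what the authors used:

# pip install nltk; the VADER lexicon is downloaded once on first run.
import nltk
nltk.download("vader_lexicon", quiet=True)
from nltk.sentiment import SentimentIntensityAnalyzer

# Made-up reflections, one per (assumed) design phase:
reflections = {
    "research":  "We are excited about the user interviews.",
    "ideation":  "Brainstorming stalled; the team felt stuck and frustrated.",
    "prototype": "The first prototype test went surprisingly well.",
}

sia = SentimentIntensityAnalyzer()
for phase, text in reflections.items():
    # The compound score ranges from -1 (negative) to +1 (positive).
    print(phase, sia.polarity_scores(text)["compound"])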
Personas are widely used in software development, system design, and HCI studies. Yet, their evaluation is difficult, and there are no recognized and validated measurement scales to date. To improve this condition, this research develops a persona perception scale based on reviewing relevant literature. We validate the scale through a pilot study with 19 participants, each evaluating three personas (57 evaluations in total). This is the first reported effort to systematically develop and validate an instrument for persona perception measurement. We find the constructs and items of the scale perform well, with factor loadings ranging between 0.60 and 0.95. Reliability, measured as Cronbach's Alpha, is also satisfactory, encouraging us to pursue the use of the scale with a larger sample in future work.
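For reference, Cronbach's Alpha for a k-item scale is alpha = k/(k-1) * (1 - (sum of the item variances) / (variance of respondents' total scores)). A short sketch with made-up ratings (the example data is ours, not the study's):

from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per scale item, respondents in the same order."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Three 5-point items answered by four respondents (made-up data):
items = [[4, 5, 3, 4], [4, 4, 3, 5], [5, 5, 2, 4]]
print(round(cronbach_alpha(items), 2))  # 0.82, a satisfactory reliability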
Puppets are often used in storytelling, but there are few studies about puppets and the storytelling experience. In this paper, we introduce the concept of an ideal puppet for storytelling and discuss directions for puppet development. The ideal puppet is able to automatically animate itself in line with a story plot and positively influence the interactions in the storytelling dynamic. To see how children and parents would accept the concept, we created a preliminary prototype and conducted a user study using the Wizard-of-Oz method. Participants experienced enhanced immersion and increased communication through the automatic movement of the puppet. They expected the puppet to take on various roles, such as actor, support tool, and friend, which made various usage scenarios possible. The puppet should be developed in a direction that enhances these advantages and accommodates varied usage scenarios, especially by combining the needs of both automation and manipulation.
Water is an essential nutrient for human health. However, in the rush of everyday life, individuals may neglect to drink enough water. We present Grow, a conceptual smart bottle prototype designed to encourage users to drink water regularly. Our concept utilizes the bottle surface as an ambient display, instead of a traditional screen-based display, to give feedback. Grow tracks daily water intake through an embedded liquid level sensor. It gives positive, abstract, non-intrusive, and aesthetic feedback by heating up different parts of a thermo-chromic print on its surface (a tree image). We also present the results of a user study exploring 10 prospective users' reactions to Grow as well as their expectations of smart water bottles in general.
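A toy sketch of how measured intake might drive the ambient display, i.e., how many segments of the thermo-chromic tree print get heated; the segment count and daily goal are assumed values, not Grow's actual design:

# Hypothetical mapping from measured daily water intake to the number of
# heated segments of the thermo-chromic tree print on the bottle surface.
SEGMENTS = 5          # e.g. roots, trunk, lower/upper branches, crown
DAILY_GOAL_ML = 2000  # assumed target intake

def segments_to_heat(intake_ml):
    """Proportion of the goal reached, quantized to display segments."""
    frac = min(intake_ml / DAILY_GOAL_ML, 1.0)
    return round(frac * SEGMENTS)

for ml in (0, 500, 1200, 2000):
    print(ml, "ml ->", segments_to_heat(ml), "segments lit")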
Chinese paper-cut is an ancient folk art thought to have originated in the 6th century. Traditional paper-cut requires workers or amateurs with a fertile imagination and professional cutting skills to achieve good visual results. This poses a challenge in pattern design, raises a dimensional-imagination barrier in fabrication, and requires sufficient training in cutting patterns on paper. Motivated by these issues, we propose a novel system that helps generate paper-cut patterns and gives fabrication instructions using digital design and interactive technology. Our system is expected to spark designers' creativity in paper-cut patterns, lower the fabrication barrier for novices, and promote the development and application of paper-cut.
Technology is being extensively used among children with autism spectrum disorder (ASD) in affluent countries. However, there is a paucity of studies exploring the use of technology for children with ASD in developing countries like Sri Lanka. Therefore, we explore the key considerations for designing software applications for children with ASD by interviewing 32 parents and 18 practitioners of children with ASD. Our findings suggest that to effectively support children with ASD in Sri Lanka, technological interventions should focus on multi-player functionalities to promote social interactions. We further identified the characteristics that are required in a software application to effectively support children with ASD in Sri Lanka, namely support for 1) timers to avoid screen addiction, 2) tangible user interfaces, 3) customization, and 4) cultural context. Based on the results, we provide recommendations to design technology for Sri Lankan children with ASD.
The near-ubiquitous usage of sticky notes is rarely the object of examination or research in itself. This paper details the initial findings of a structured survey and analysis of existing research literature mentioning sticky note usage. The main contributions of the paper are: An overview of our findings so far, which includes characterizations of seven categories of sticky note usage, reported in two related research communities - the Association for Computing Machinery (ACM) and the Design Research Society (DRS) - with accompanying highlighted examples; a discussion of the results alongside suggestions for questions worth pursuing in future research, based on our findings.
Research on physical representations of data has often used personal data as its focus. A core aim of making personal data physical is to provoke self-reflection through a felt experience. In this paper, we present a preliminary study which employs the idea of gift-giving as a means to explore one's online data. Our main findings report strategies for relating to the data of strangers, as well as a conflict between what participants thought of their online selves and what others were able to find. We discuss how the gifts became platforms for self-reflection, similar to physical data models. We then connect this to the engagement of a third person (the gift-giver) in the process, highlighting the potential of such involvement. In future work, we will focus on how to link people's meaningful artifacts with their personal data.
There is increasing interest in the role that ethics plays in UX practice; however, current guidance is largely driven by formalized frameworks and does not adequately describe "on the ground" practitioner conversations regarding ethics. In this late-breaking work, we identified and described conversations about a specific ethical phenomenon on Twitter using the hashtag #darkpatterns. We then determined the authors of these tweets and analyzed the types of artifacts or links they shared. We found that UX practitioners were most likely to share tweets with this hashtag, and that a majority of tweets either mentioned an artifact or "shamed" an organization that engages in manipulative UX practices. We identify implications for building an enhanced understanding of pragmatist ethics from a practitioner perspective.
Health-related IT products (e.g., Fitbit) employ persuasive technologies to reinforce an individual's desired behaviors. While these products are dedicated to certain health behaviors, such as walking or specific types of sports, IoT at home can be integrated more broadly throughout one's daily life. To address this opportunity, this paper aims to shed light on the use of domestic IoT to foster changes toward healthy behaviors through a 3-week explorative field trial. This paper reports two major goals of health-promoting technologies using IoT, as well as different persuasive techniques according to the temporal phases of before, during, and after the health behaviors.
This article presents the results of a preliminary qualitative survey concerning user-interface (UI) designers' awareness of, and techniques for addressing, non-algorithmic online bias and discrimination. The results suggest that 1) online bias and discrimination are not being widely considered in professional UI development, 2) strategies currently being used to address bias are not evidence-based and therefore their efficacy is unknown, and accordingly, 3) there is a need for more evidence-based UI design strategies to address online bias.
Audience feedback is a valuable asset in many domains such as the arts, education, and marketing. Artists can receive feedback on the experiences created through their performances. Similarly, teachers can receive feedback from students on the understandability of their course content. There are various methods for collecting explicit feedback (e.g., questionnaires), yet they usually impose a burden on the audience. Advances in physiological sensing open up opportunities for collecting feedback implicitly. This creates unexplored dimensions in the design space of audience sensing. In this work, we chart a comprehensive design space for audience sensing based on a literature and market review, which aims to support designers in creating novel feedback systems.
We describe a process of materials inquiry that gave rise to a new kind of sensor: a string figure sensor that correlates resistance changes with the topology of a closed loop of string. We describe the critical and reflective process from which our string figure sensor emerged, how the sensor works, and the future applications we envision our sensor supporting.
ThinkInk is an intelligent sketch-based tutoring tool for learning data structures. Our initial evaluation with 45 students shows that they find the tool engaging, fun and a good learning experience. This paper focuses on the interaction design and software engineering required to build such a tool.
Despite a growing body of research about the design and use of conversational agents, existing work has almost exclusively focused on interactions between an agent and a human. Less is known about how an agent is perceived and used during human-human conversation. We compared conversations between dyads using AI-assisted and standard messaging apps and elicited qualitative feedback from users of the AI-assisted messaging app through interviews. We find discrepancies between the AI assistant's suggestions and the conversational content, which is also reflected in participant interviews. Our results are used to suggest some areas for improvement and future work in AI-assisted communication.
In the era of highly automated driving, the role of drivers is shifting from actually driving to being a passenger, supervisor, or cooperative partner. For cooperative driver-vehicle interaction, new interfaces have to be developed, for instance to approve maneuvers. We implemented two different interaction techniques (clicking and holding down a button) to approve an overtaking maneuver on rural roads. We conducted a driving simulator study to investigate the usability of these touchscreen interaction techniques. Our results suggest that a simple click provides better usability. Finally, we highlight future research directions that should be considered in the design of such interfaces for cooperative driver-vehicle interaction.
This paper investigates the effect of vibro-kinetic (VK) technology on psychophysiological states of users in a virtual reality context. Specifically, we investigate whether a VK seat, i.e., a seat using movement and vibration synchronized with a given media, induces psychophysiological states aligned with an optimal immersive virtual reality (VR) experience. We test our hypotheses with subjects in a seated position while experiencing a passive vehicular movement with a VR headset. Using a between-subject experiment, 45 participants were randomly assigned to a VK or a non-VK condition. Users' psychophysiological states were measured using electrodermal activity, heart rate, and user perceptions. We find evidence that VK significantly enhances the physiological activation of the user throughout the experience. Also, we find that VK seems to create a psychological state that requires less conscious autoregulation, which could suggest that users experience less cybersickness in this condition.
In this paper, we present two case studies describing how two organizations practiced User Experience (UX) in the summer of 2017; both were 'in-house' departments in consumer-facing companies in the Chicagoland area of Illinois. We conducted 16 interviews (involving 22 people) with leadership and practitioners, and observations (job-shadowing) with 14 of those we interviewed. Key takeaways included: (a) practitioners came from a variety of backgrounds resulting in multidisciplinary teams; (b) leadership desired UX employees that were generalists; and (c) inexpensive tools designed for UX were common for creating artifacts and facilitating communication resulting in a dynamic tool-scape. These findings have implications for instructors teaching in UX and students in UX programs; we also argue the findings will interest UX practitioners who are curious about sharing and learning from each other.
Researchers who use remote unmoderated usability testing services often rely on panels of on-demand participants. In this exploratory study, we wanted to better understand the effects of participants completing many usability studies over time, particularly how this experience may shape the content and quality of user feedback. We present a user study with 15 diverse "professional participants" and discuss some of the panel conditioning effects described by participants.
Virtual Reality (VR) devices have increasingly sparked both commercial and academic interest. While applications range from immersive games to real-world simulations, little attention has been given to the display of text in virtual environments. Since reading remains a crucial activity for consuming information in the real and digital world, we set out to investigate user interfaces for reading in VR. To explore comfortable reading settings, we conducted a user study with 18 participants focusing on parameters such as text size, convergence, and view box dimensions and positioning. This paper presents the first step in our work towards guidelines for effectively displaying text in VR.
Users' cognitive load while interacting with a system is a valuable metric for evaluations in HCI. We encourage the analysis of eye movements as an unobtrusive and widely available way to measure cognitive load. In this paper, we report initial findings from a user study with 26 participants working on three visual search tasks representing different levels of difficulty. We also linearly increased the cognitive demand while participants solved the tasks. This allowed us to analyze how individual eye movements react to different levels of task difficulty. Our results show how pupil dilation, blink rate, and the number of fixations and saccades per second individually react to changes in cognitive activity. We discuss how these measurements could be combined in future work to allow for a comprehensive investigation of cognitive load in interactive settings.
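As a sketch of how the eye-movement measures above can be derived in practice, the following Python fragment computes blink rate, fixation and saccade frequency, and mean pupil diameter from a gaze recording. It assumes samples have already been event-classified by the eye tracker's filter; the field names and event labels are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch: deriving the eye-movement measures named above from a
# pre-labeled gaze recording. Field names and event labels are assumptions;
# real eye trackers expose different formats and event filters.
from dataclasses import dataclass

@dataclass
class GazeSample:
    t: float          # timestamp in seconds
    pupil_mm: float   # pupil diameter
    event: str        # "fixation", "saccade", or "blink" (assumed pre-classified)

def cognitive_load_features(samples: list[GazeSample]) -> dict:
    duration = samples[-1].t - samples[0].t
    # Count contiguous runs of each event type, not raw samples.
    runs = []
    for s in samples:
        if not runs or runs[-1][0] != s.event:
            runs.append([s.event, 1])
        else:
            runs[-1][1] += 1
    counts = {"fixation": 0, "saccade": 0, "blink": 0}
    for event, _ in runs:
        counts[event] += 1
    pupils = [s.pupil_mm for s in samples if s.event != "blink"]
    return {
        "fixations_per_s": counts["fixation"] / duration,
        "saccades_per_s": counts["saccade"] / duration,
        "blink_rate_per_min": counts["blink"] / duration * 60,
        "mean_pupil_mm": sum(pupils) / len(pupils),
    }
```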
Auditory user interfaces (AUIs) have been developed to support data exploration, increase engagement with arts and entertainment, and provide an alternative to visual interfaces. Standard measures of usability such as the SUS [4] and UMUX [8] can help with comparing baseline usability and user experience (UX), but the overly general nature of the questions can be confusing to users and can present problems in interpretation of the measures when evaluating an AUI. We present an efficient and effective alternative: an 11-item Auditory Interface UX Scale (BUZZ), designed to evaluate interpretation, meaning, and enjoyment of an AUI.
We report findings and implications from a semi-naturalistic user study, conducted at the workplace of a major international corporation, of a system for Automatic Persona Generation (APG) using large-scale audience data from an organization's social media channels. Thirteen participants from a range of positions within the company engaged with the system in a use case scenario. We employed a variety of data collection methods, including mouse tracking and survey data, analyzing the data with a mixed method approach. Results show that having an interactive system may aid in keeping personas at the forefront while making customer-centric decisions, and indicate that data-driven personas fulfill the information needs of decision makers by mixing personas and numerical data. The findings have implications for the design of persona systems and the use of online analytics data to better understand users and customers.
Artificial intelligence (AI) based recommendation agents (RAs) can help managers make better decisions by processing a large quantity of decision-relevant information. Research on user-RA interactions shows that users benefit from RAs, but that there are some challenges to their adoption. For instance, RA adoption can only happen if users trust the RA. Thus, this study investigates how the richness of the information provided by an RA, and the effort necessary to reach this information, influence users' perceptions and usage. A within-subject lab experiment was conducted with 20 participants. Results suggest that perceptions toward the RA (trust, credibility, and satisfaction) are influenced by the RA's information richness, but not by the effort needed to reach this information. In addition to contributing to the HCI literature, the findings have implications for the design of better AI-based RA systems.
Interacting with computer generated humans in virtual reality is becoming more popular with the current increase in accessibility of virtual reality head mounted displays and applications. However, simulating accurate behavior in computer generated humans remains a challenge. In this study, we tested the effects of full-body behavior (body movement and head movement) on viewers' perception (comfortability with, and realism of, the computer generated human) using an animated computer generated human in virtual reality. Our findings show a significant influence of body animation (excluding head animation) on both the comfortability and realism of the computer generated human. 37.5% of the participants did not notice the exclusion of the head animation, implying the importance of body animations over head animations. Using the results, we derive guidelines on computer generated human design and realization as well as their influence on the viewer's perception. Finally, we discuss the constraints that should be taken into account when animating in virtual reality.
We describe a method and a prototype implementation for filtering shared social data (e.g., 360 video) in a wearable Augmented Reality (e.g., HoloLens) application. The data filtering is based on user-viewer relationships. For example, when sharing a 360 video, if the user has an intimate relationship with the viewer, then the full fidelity of the user's environment (i.e., the 360 video) is visible; but if the two are strangers, only a snapshot image is shared. By varying the fidelity of the shared content, the viewer is able to focus more on the data shared by their close relations and differentiate it from other content. The approach also gives the sharing user more control over the fidelity of the content shared with their contacts, preserving privacy.
Cyborgs are human-machine hybrids with organic and mechatronic body parts. Like humans, cyborgs may use their additional body parts for physical tasks and communication. In this study, we investigate how additional arms can be used to communicate. While using additional arms to perform physical tasks has been researched, using them to communicate is an area that is largely unexplored. Our study is divided into three stages: a pilot study, implementation, and a user study. In this paper, we discuss our efforts as related to the first two stages of our study. The pilot study was used to determine user expectations for the arms. Participants found the arms effective for describing an area from a fixed location. Users also preferred additional arms that can be controlled and are physically similar to their existing arms. Our prototype consists of a virtual mirror that augments the user's body with additional arms. We discuss future directions for improving our implementation and outline a plan for the user study.
We present Codestrate Packages, a package-based system for creating extensible software within Codestrates. Codestrate Packages turns content creation from an application-centric model into a document-centric one: users are no longer restricted to the feature set of an application. Instead, packages allow users to add new features to their documents while already working in them, matching the features to their current task at hand. Supporting the reprogrammable nature of Codestrates, new features can also be implemented by users themselves and shared with other people without having to leave the document. We illustrate the application of Codestrate Packages in an example scenario and present its technical concepts. We plan to conduct multiple user studies to investigate the benefits and barriers of Codestrate Packages' document-centric approach.
Statistical analysis is a frequent task in several research fields such as HCI, Psychology, and Medicine. Performing statistical analysis using traditional textual programming languages like R is considered to have several advantages over GUI applications like SPSS. However, our examination of 40 analysis scripts written using current IDEs for R shows that such scripts are hard to understand and maintain, limiting their replication. We present StatWire, an IDE for R that closely integrates the traditional text-based editor with a visual data flow editor to better support statistical programming. A preliminary evaluation with four R users indicates that this hybrid approach could result in statistical programming that is more understandable and efficient.
Nowadays, end users can customize their technological devices and web applications by means of trigger-action rules, defined through End-User Development (EUD) tools. However, debugging capabilities are an important missing feature in these tools, which limits their large-scale adoption. Problems in trigger-action rules can in fact lead to unpredictable behaviors and security issues, e.g., a door that is unexpectedly unlocked. In this paper, we present a novel debugging approach for trigger-action programming. The goal is to assist end users during the composition of trigger-action rules by: a) highlighting possible problems that the rules may generate, and b) allowing their step-by-step simulation. The approach, based on Semantic Web technologies and Petri Nets, has been implemented in an EUD tool and preliminarily evaluated in a user study with 6 participants. The results provide evidence that the tool is usable, and that it helps users understand and identify problems in trigger-action rules.
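To make the step-by-step simulation idea concrete, here is a deliberately simplified sketch in Python. The paper's actual approach builds on Semantic Web models and Petri Nets; this toy version only chains rules whose actions fire other rules' triggers and flags possible loops, one class of problem such a debugger must surface.

```python
# Simplified sketch of step-by-step trigger-action rule simulation, flagging
# chains where one rule's action fires another rule's trigger (a common source
# of unpredictable behavior). The paper's approach uses Semantic Web models
# and Petri Nets; this toy version only illustrates the idea.
from collections import namedtuple

Rule = namedtuple("Rule", ["trigger", "action"])

def simulate(rules, initial_event, max_steps=10):
    """Return (step, fired_rule) pairs; warn when rules keep firing each other."""
    event, fired = initial_event, []
    for step in range(max_steps):
        matching = [r for r in rules if r.trigger == event]
        if not matching:
            return fired
        rule = matching[0]           # real tools would explore all matches
        fired.append((step, rule))
        if any(r is rule for _, r in fired[:-1]):
            print(f"Possible loop: {rule} re-fired at step {step}")
            return fired
        event = rule.action          # the action becomes the next event
    print("Max steps reached: rules may chain indefinitely")
    return fired

rules = [Rule("door_unlocked", "lights_on"),
         Rule("lights_on", "door_unlocked")]   # a problematic pair
simulate(rules, "door_unlocked")
```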
In the conceptual design phase, designers routinely generate dozens of alternatives based on a single idea. This is especially relevant in generative design, where an algorithm can generate a large number of viable design options. While solutions for creating and managing a small number of simple alternatives have been proposed, practical applications of these solutions are limited. As a result, we present GEM-NI+, an extension to the original GEM-NI system for creating and managing alternatives in generative design. GEM-NI+ is designed to enable editing, managing and comparing up to 24 alternatives simultaneously using a multi-monitor setup. GEM-NI+ also features a new "jamming spaces" technique for assigning individual monitors to different visualization states, which makes organization of a large workspace easier. Finally, GEM-NI+ enables comparison of complex alternatives using recursive group node difference visualization.
During the process of design, copies of files are often stored to track changes and design development, or to ensure that previous work will not be lost. In the field of software design, this process is supported by versioning systems, in which source code is saved intermittently when features are added or modified, for individual or group use. We argue that a similar versioning system, applied to 3D design files, would also benefit the design community, letting designers see how their designs progress and collaborate. In this paper we outline an implemented web-based open ecosystem that allows designers to collaborate similarly, but with a lower bar for adoption than comparable software versioning systems. Our system is to be applied in a classroom setting, where architecture students learn to make structural designs; they are then able to see, modify, and give feedback on each other's work.
This paper describes Toccata, an activity-centric system for classroom orchestration. Through preliminary fieldwork, we identified three main challenges in the management of digital activities in classrooms: 1. poor infrastructures lead to breakdowns in the activity; 2. activity structures are idiosyncratic (they vary widely and are rarely shared); and 3. orchestration is difficult because teachers lack an overview of the unfolding activities. We developed Toccata to support activity scripting and orchestration in situations with unreliable connectivity. Based on a preliminary trial, we outline directions for activity-centric orchestration systems: 1. focus on timing; 2. provide several levels of awareness; 3. support activity suspend and resume in changing contexts.
Colour plays a critical role in the design of artistic works and visual media. Existing colour management tools are limited in their ability to support generation and manipulation of multiple palette variations. We present Chromotype, a generative design utility for the creation of colour schemes based on existing palettes, user-defined colour sets, and reference images. Chromotype applies aspects of colour theory, random variation, and image sampling to create multiple palettes according to user-specified settings. The tool facilitates exploration of alternative palettes through a gallery interface, allowing designers to quickly create, manipulate, and save a variety of colour schemes for use in their work.
Additive manufacturing, the academic term for 3D printing, fabricates new objects by adding material layer by layer. Our team extended this technology to the next level, enabling users to add material directly onto ready-made (3D-printed) objects; we therefore named our technology 'Additive-Additive-Manufacturing'. While Additive-Additive-Manufacturing is a general-purpose technology, we believe that one of its killer applications is customizing and modifying the body parts of personal pet robots. In this paper, we report our concept, the software and hardware technologies of additive-additive-manufacturing, and prototypes of a growable pet robot and its potential.
Human skin is the largest organ of our body and does more than sense the external environment. A growing number of researchers devote themselves to designing seamless interfaces directly on the skin. In this late-breaking work, we propose SKIN+, a novel way of creating dynamic 2.5D skin textures: a soft, fluidic, mini-scale user interface based on fluidic actuation. We have created four swatches with different pre-defined textures, topologies, and structures to explore how this fluidic actuation system can benefit on-skin experiences and interactions. Our work details these intriguing experiences and interactions as well as future applications of on-skin wearables. It also extends the expressiveness, aesthetics, and design space of soft fluidic interfaces as skin decoration and beauty technology.
Prototyping a physical user interface involves not only connecting electronic parts and designing an enclosure, but also arranging and configuring the electronic parts within the enclosure. This process is complicated and difficult for people who do not have modeling skills. We propose ProtoHole, a system for prototyping interactive 3D-printed objects using holes, internal cavities, and swept-frequency acoustic sensing. By emitting a high-frequency sweep signal inside the object, our system classifies changes in resonance properties when holes are closed, using a machine learning technique. An object can therefore easily be made interactive without considering the arrangement of internal electronic parts and wiring. We show examples of prototypes created with our system.
The growing maker community demands better support for designing and fabricating interactive functional objects. Most current approaches focus on embedding desired functions within new objects. Instead, we advocate re-purposing existing objects and authoring interactive functions onto them. We present Plain2Fun, a design and fabrication pipeline enabling users to quickly transform ordinary objects into interactive, functional ones. Plain2Fun allows users to design circuit layouts directly onto the surfaces of scanned 3D models of existing objects. Our design tool automatically generates circuit paths between any two points that are as short as possible while avoiding intersections. Further, we built a digital machine to construct the conductive paths accurately. With a specially designed housing base, users can simply snap the electronic components onto the surfaces and obtain working physical prototypes. Moreover, we evaluate the usability of our system with multiple use cases.
High latency in an interactive system limits its usability. In order to reduce end-to-end latency of such systems, it is necessary to analyze and optimize the latency of individual contributors, such as input devices, applications, or displays. We present a simple tool for measuring the latency of USB-connected input devices with sub-millisecond accuracy. The tool, based on a Raspberry Pi 2 microcomputer, repeatedly toggles a button of a game controller, mouse, or keyboard via an optocoupler soldered to the button and measures the time until the input event arrives. This helps researchers, developers, and users to identify and characterize sources of input lag. An initial comparison of multiple input devices shows differences not only in average latency but also in its variance.
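The measurement loop can be sketched in a few lines of Python on the Raspberry Pi, using the RPi.GPIO and evdev libraries. The pin number and input device path below are placeholders, and reaching the paper's sub-millisecond accuracy would require more careful timing than this sketch shows.

```python
# Minimal sketch of the measurement idea: drive an optocoupler that shorts a
# device button via a GPIO pin, then time how long the matching input event
# takes to arrive. Pin number and device path are placeholders; achieving
# sub-millisecond accuracy as in the paper needs more careful timing.
import time
import RPi.GPIO as GPIO
from evdev import InputDevice, ecodes

PIN = 17                                   # GPIO wired to the optocoupler
dev = InputDevice("/dev/input/event0")     # the device under test

GPIO.setmode(GPIO.BCM)
GPIO.setup(PIN, GPIO.OUT, initial=GPIO.LOW)

latencies = []
for _ in range(100):
    GPIO.output(PIN, GPIO.HIGH)            # "press" the button
    t0 = time.perf_counter()
    for event in dev.read_loop():          # blocks until events arrive
        if event.type == ecodes.EV_KEY and event.value == 1:
            latencies.append(time.perf_counter() - t0)
            break
    GPIO.output(PIN, GPIO.LOW)             # release and settle
    time.sleep(0.2)

print(f"mean latency: {1000 * sum(latencies) / len(latencies):.2f} ms")
```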
Buttons are the most commonly used input devices. So far, designers' goal has been to provide a passive button that accepts user input as easily as possible; based on Fitts' law, they maximize the size of the button and minimize the distance to it. This paper proposes Button++, a novel method for designing smart buttons that actively judge the user's movement risk and selectively trigger input. Based on the latest model of moving target selection, Button++ tracks the user's submovement just before the click and infers the expected error rate that would occur if the user repeatedly clicked with the same movement. This allows designers to make buttons that actively respond to the amount of risk in the user's input movement.
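The underlying idea of inferring risk from the final submovement can be illustrated with a generic stand-in: model the click endpoint as a Gaussian around the predicted landing position and integrate the miss probability outside the button. The paper's actual model of moving target selection is more sophisticated; all numbers below are made up.

```python
# Illustrative stand-in for inferring expected error rate from the final
# submovement: treat the click endpoint as Gaussian around the predicted
# landing position and compute the probability of missing the button.
# This is not the Button++ model from the paper, only the general idea.
from math import erf, sqrt

def expected_error_rate(pred_x: float, sigma: float,
                        button_left: float, button_right: float) -> float:
    """P(click lands outside [button_left, button_right])."""
    def cdf(x):  # normal CDF centered on the predicted endpoint
        return 0.5 * (1 + erf((x - pred_x) / (sigma * sqrt(2))))
    p_hit = cdf(button_right) - cdf(button_left)
    return 1 - p_hit

# A button from x=100 to x=140 px; endpoint predicted at 118 px, 15 px spread.
risk = expected_error_rate(118, 15, 100, 140)
print(f"expected error rate: {risk:.0%}")  # a smart button could refuse risky clicks
```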
Distributed systems are very complex and, in case of errors, hard to debug. The high number of messages with non-deterministic delivery timings, as well as message losses, data corruption, and node crashes, cannot be efficiently analyzed with traditional GUI tools. We propose using immersive technologies in a multi-display environment to tackle these shortcomings. Our DebugAR approach shows a representation of the current system state, message provenance, and the lifetime of participating nodes, and offers layouting techniques. By providing a screen that shows a traditional text log, we bridge the gap to conventional tools. Additionally, we propose an interactive 3D visualization of the message flow, combining an interactive tabletop with augmented reality using a head-mounted display. We are confident that our proposed solution can be used not only to analyze distributed systems, but also other time-dependent networks.
We identify difficulties and opportunities in using virtual reality on passenger boats. As part of a larger ecotourism study, we experimented with the use of virtual reality (VR) on a boat using a head mounted display. This paper gives an overview of the causes and current knowledge of motion sickness, and details our experience of using VR on a catamaran. While there may be some limitations and challenges to overcome related to current hardware offerings, there are also opportunities in minimizing seasickness through visual distraction that are worth investigating further.
We present a low-cost tracking technique for mobile virtual reality (VR) head-mounted displays (HMDs) that uses retro-reflective markers. Our technique allows the built-in cameras of existing smartphones to track real objects even if they are moving rapidly. The user simply attaches retro-reflective markers to the objects or creates unique objects (easily fabricated by 3D printing) and uses them as input devices for smartphone-based HMD applications. Several VR game applications that use our technique are described, and testing shows that our technique works effectively.
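The core tracking step, assuming the markers appear as near-saturated blobs when illuminated, could look like the following OpenCV sketch; the threshold and minimum blob area are placeholder values, not figures from the paper.

```python
# Sketch of the core tracking step, assuming retro-reflective markers show up
# as near-saturated blobs in the camera image: threshold the frame and take
# blob centroids. Threshold and area bounds are placeholders.
import cv2

def marker_centroids(frame_bgr, thresh=240, min_area=10):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] >= min_area:   # m00 is the blob area in pixels
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(marker_centroids(frame))
```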
This paper presents ongoing work on a design exploration for mixed-scale gestures, which interleave microgestures with larger gestures for computer interaction. We describe three prototype applications that show various facets of this multi-dimensional design space. These applications portray various tasks on a Hololens Augmented Reality display, using different combinations of wearable sensors. Future work toward expanding the design space and exploration is discussed, along with plans toward evaluation of mixed-scale gesture design.
Euclidean distance is traditionally used to compare a gesture candidate against gesture templates in two-dimensional gesture recognizers. This paper compares two distances borrowed from other domains of computer science in template-based two-dimensional gesture recognizers: the Mahalanobis distance, typically used in computer vision and statistics, and the Jaro-Winkler distance, typically used in information retrieval and pattern recognition. Although the geometric interpretation of these distances is less straightforward for designers, the Mahalanobis distance has a significant impact on recognition rate, while the Jaro-Winkler distance does not.
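For readers wanting to experiment, here is a minimal sketch of swapping the distance function in a template-based recognizer. Gestures are assumed to be resampled to a fixed number of points and flattened into vectors; applying Jaro-Winkler additionally requires encoding gestures as strings (e.g., direction codes), which is omitted here. The covariance matrix is a placeholder; in practice it would be estimated from the training templates.

```python
# Sketch of swapping the distance function in a template-based recognizer:
# gestures are resampled to a fixed number of points and flattened to vectors,
# then compared with Euclidean or Mahalanobis distance.
import numpy as np

def euclidean(candidate: np.ndarray, template: np.ndarray) -> float:
    return float(np.linalg.norm(candidate - template))

def mahalanobis(candidate: np.ndarray, template: np.ndarray,
                cov: np.ndarray) -> float:
    d = candidate - template
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

n_points = 32                       # resampled points per gesture
dim = 2 * n_points                  # flattened (x, y) coordinates
rng = np.random.default_rng(0)
cand = rng.random(dim)
tmpl = rng.random(dim)
cov = np.eye(dim) * 0.05            # placeholder; learn from templates in practice

print(euclidean(cand, tmpl), mahalanobis(cand, tmpl, cov))
```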
We investigate mobile phone pointing in Spatial Augmented Reality (SAR). Three pointing methods (raycasting, viewport, and tangible, i.e., direct contact) are compared using a five-projector "full" SAR environment with targets distributed on varying surfaces. Participants were permitted free movement in the environment to create realistic variations in target occlusion and target incident angle. Our results show raycast is fastest for high and distant targets, tangible is fastest for targets in close proximity to the user, and viewport performance lies in between.
We present GLATUI, a system that non-intrusively augments physical interaction between humans and everyday objects with rich expressions extracted directly from motion. The proposed system measures velocities on human skin or the surface of any object and turns them into a user interface without any pre-instrumentation or assumptions about appearance. Specifically, we investigated the acquisition of three basic motion elements: rigid-body motion, deformation, and vibration. These elements enable natural, high-degree-of-freedom interactions. In the proposed system, the three motion elements are simultaneously estimated from on-surface velocities, which are measured using a galvanoscopic scanning laser Doppler vibrometer. We propose a method for measuring rigid-body motion, deformation, and vibration, and demonstrate several applications of the proposed system, including an on-hand UI with hand gestures, a vibration-based object recognition application, and a conceptual tangible UI. The recognition rate and context awareness of the proposed method in these applications were also evaluated.
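The rigid-body component of such an estimation can be sketched as a linear least-squares problem: each measured surface velocity satisfies v_i = v + ω × r_i for a shared translation v and angular velocity ω. The sketch below shows only this first step under idealized assumptions; separating deformation and vibration, as the paper does, would require further analysis of the residuals.

```python
# Sketch of recovering the rigid-body component from per-point surface
# velocities: solve v_i = v + w x r_i for translation v and angular velocity w
# in least squares. The residual holds the non-rigid motion (deformation and
# vibration), which the paper analyzes further; this shows the first step only.
import numpy as np

def skew(r):
    """Cross-product matrix so that skew(r) @ w == r x w."""
    return np.array([[0, -r[2], r[1]],
                     [r[2], 0, -r[0]],
                     [-r[1], r[0], 0]])

def rigid_body_velocity(points, velocities):
    """points, velocities: (N, 3) arrays of positions and measured velocities."""
    rows_A, rows_b = [], []
    for r, v in zip(points, velocities):
        rows_A.append(np.hstack([np.eye(3), -skew(r)]))  # v_i = v - [r]x w
        rows_b.append(v)
    A = np.vstack(rows_A)
    b = np.hstack(rows_b)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    v, w = x[:3], x[3:]
    residual = b - A @ x          # leftover motion: deformation + vibration
    return v, w, residual

pts = np.array([[0.1, 0, 0], [0, 0.1, 0], [0, 0, 0.1], [0.1, 0.1, 0]])
vels = np.array([[1.0, 0, 0]] * 4)          # pure translation along x
print(rigid_body_velocity(pts, vels)[0])    # approx [1, 0, 0]
```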
The recognition process of self-attribution, which is mainly caused by congruence between visual and proprioceptive information and between visual information and prediction from motor commands, has been extensively studied. However, it is still unclear which congruence plays the primary role in the process during voluntary movements. We conducted a user study that distinguishes proprioceptive information from prediction from motor commands by displaying modified images of the participants' hands at various rotation angles, introducing a conflict between visual and proprioceptive information. The participants' hand motions were restricted so that they could predict the visual motion of the images of their hands from the motor command even while the images were rotated. The results indicate that motion prediction plays the primary role in the recognition process of self-attribution, and that this predictability depends on the motion pattern and appearance of the hand images.
Interactive systems with dynamic content are becoming ubiquitous. However, it is challenging to select small, fast-moving targets in such environments. We present 2D-BayesPointer, a novel interaction technique to assist moving-target selection in 2D space. Compared with previous techniques, our method provides implicit support without modifying the original interface design. Moreover, the algorithmic parameters are determined by probabilistic modeling of human performance in moving-target selection tasks. Preliminary results from a pilot study show that this technique can significantly improve both selection speed and accuracy.
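Although the paper's parameters come from a model of human performance, the general shape of a Bayesian moving-target technique can be sketched as follows: score each target by a Gaussian likelihood of the click around the target's extrapolated position and select the maximum a posteriori target. The sigma value and priors below are placeholders, not values from the paper.

```python
# Generic sketch of Bayesian endpoint-to-target assignment for moving targets:
# extrapolate each target to the click time, score it by a Gaussian likelihood
# around the click, and pick the maximum a posteriori target.
import math

def map_target(click, targets, dt, sigma=20.0):
    """targets: list of dicts with 'pos' (x, y), 'vel' (vx, vy), 'prior'."""
    best, best_score = None, -math.inf
    for t in targets:
        # Extrapolate where the target is at click time.
        px = t["pos"][0] + t["vel"][0] * dt
        py = t["pos"][1] + t["vel"][1] * dt
        d2 = (click[0] - px) ** 2 + (click[1] - py) ** 2
        log_post = math.log(t["prior"]) - d2 / (2 * sigma ** 2)
        if log_post > best_score:
            best, best_score = t, log_post
    return best

targets = [{"pos": (100, 100), "vel": (50, 0), "prior": 0.5},
           {"pos": (140, 95),  "vel": (0, 0),  "prior": 0.5}]
print(map_target(click=(130, 102), targets=targets, dt=0.5))
```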
Previous work has documented that limitations of current stereo display systems affect depth perception. We performed an experiment to understand whether such stereo display deficiencies affect 3D pointing for targets in front of a screen and close to the user, i.e., in peri-personal space. Our experiment compares isolated movements with and without a change in visual depth for virtual targets. Results indicate that selecting targets along the depth axis is slower and yields less throughput than selecting laterally positioned targets.
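Throughput here presumably follows the common effective formulation from the pointing literature: We = 4.133 × the standard deviation of endpoint deviations, IDe = log2(De/We + 1), and TP = IDe / MT. A worked example with made-up numbers:

```python
# Worked example of effective throughput as commonly computed in pointing
# studies: We = 4.133 * SD of endpoint projections, IDe = log2(De/We + 1),
# TP = IDe / MT. The sample numbers are illustrative only.
import math
import statistics

def throughput(endpoint_errors, mean_distance, mean_time):
    """endpoint_errors: signed distances from selection points to the target
    center along the movement axis; mean_time in seconds."""
    we = 4.133 * statistics.stdev(endpoint_errors)   # effective width
    ide = math.log2(mean_distance / we + 1)          # effective index of difficulty
    return ide / mean_time                           # bits per second

errors = [3.1, -2.4, 0.8, 4.0, -1.2, 2.2, -3.5, 0.3]  # mm, illustrative
print(f"TP = {throughput(errors, mean_distance=120, mean_time=0.85):.2f} bit/s")
```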
We created a mobile asymmetric collaboration system that allows a remote expert to assist a nomadic operative in a maintenance task. We explored the use of a virtual reality headset and several cameras and controls that (1) are easy for the operative to set up on site, (2) enable the expert to better understand (be more immersed in) the situation on site, and (3) offer the expert various flexible interface options to study the environment and act on the devices on site independently of the operative. Preliminary results indicate that the expert appreciates the opportunity for a 360-degree view and the flexibility of using detailed controls on a close-up camera view.
With the growing popularity of social media, many person-to-person interactions happen in the virtual space using various metaphors like emojis, tweets, and likes. In this late-breaking work, we aim to map such metaphors from the virtual world into the physical world and understand the interactions that follow. We have proposed and prototyped a wearable that is a tangible representation of two social media metaphors: the poke, and categorizing people as friends, acquaintances, and strangers. These categories are represented using touch patches, and the interactions are visualized on the wearable.
Here we present a social wearables prototype, i.e., a wearable that augments collocated social interaction: the Lågom. The design is meant to help people be aware of and better regulate their verbal participation in group discussions. Lågom takes the shape of a colorful, bulky, funny-looking flower that senses the wearer's speaking and responds with haptic and visual feedback. We ran a pilot study with nine people participating in a class discussion. Preliminary results show the potential of the haptic feedback to increase self-awareness of participation and to help people better regulate their participation in group discussions.
A common way to display information via tactile output is to use vibration motors. However, vibration is often perceived as a rather artificial, even unpleasant cue. We explore a novel method based on pneumatic cues to provide more natural tactile output. We use two airbags positioned at the back of the user at shoulder height to give navigational cues, utilizing the shoulder-tap metaphor to indicate directions to the left, right, or ahead. We compare the pneumatic cue to a vibro-tactile cue at the same position. Our results show that the pneumatic cue was rated as significantly less urgent than the vibro-tactile cue. As there were no significant differences in error rate, annoyance, or usability, we rate ShoulderTap as an eligible alternative to vibro-tactile cues.
In this paper, we realize a wearable tactile device that delivers smooth, pleasant strokes on the forearm, resembling caressing and calming sensations. In Study 1, we develop a psychophysical model of continuous illusory motion on a discrete vibrotactile array. We use this model to generate a variety of tactile strokes that vary in frequency (quality), amplitude (strength), and duration (speed), and test them on a hedonic pleasant-unpleasant scale in Study 2. Our results show that low-frequency (<40 Hz) strokes at low amplitude (light touch) feel pleasant, while high-frequency strokes are unpleasant. High-amplitude strokes are biased towards unpleasantness. Our results are useful for artificial means of enhancing social presence among individuals in virtual and augmented settings.
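A generic rendering loop for such strokes staggers actuator onsets along the array and overlaps their vibration envelopes so the motion feels continuous. The onset-overlap relationship in the paper comes from its psychophysical model (Study 1); the fixed 40% overlap below is purely illustrative.

```python
# Generic sketch of driving a linear vibrotactile array to render a stroke:
# stagger actuator onsets along the arm and overlap their envelopes so the
# motion feels continuous. The 40% overlap is made up, not the paper's model.
def stroke_schedule(n_actuators, stroke_duration_s, overlap=0.4):
    """Return (onset, vibration_duration) per actuator for one stroke."""
    soa = stroke_duration_s / n_actuators          # stimulus onset asynchrony
    dur = soa * (1 + overlap)                      # overlap adjacent actuators
    return [(i * soa, dur) for i in range(n_actuators)]

# A pleasant stroke per the findings: low frequency, low amplitude, slow.
for onset, dur in stroke_schedule(n_actuators=6, stroke_duration_s=1.5):
    print(f"start {onset:.2f}s, vibrate {dur:.2f}s at <40 Hz, light amplitude")
```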
Vision and audition are widely used in human-computer social interaction. Even though highly performant tactile sensors are available on the market, very few social interfaces exploit haptics as a sensing modality. Touch is a fundamental channel for communicating complex social feelings. Recent haptic studies reveal a paradigm shift in the design of social touch interfaces. In this research, we investigate the design of a new compliant electronic skin (e-skin) based on piezoresistive carbon nanotube (CNT) films to facilitate interaction. The CNT e-skin uses tiny mechanical sensors fabricated at low cost. Because of their 3D structure, the CNT mechanical response can be easily adjusted according to the application. Here, we present the multimodality of the CNT e-skin. Inspired by the human somatosensory system, this compliant CNT e-skin is sensitive to pressure, temperature, and moisture. A first integration on a robotic platform is ongoing work, with the objective of using it as a social interaction modality.
Today, public displays are used to present general-purpose information or advertisements in many public and urban spaces. In addition, research has identified novel application scenarios for public displays. These scenarios, however, mainly involve gesture- and posture-based interaction relying on optical tracking. Deploying optical tracking systems in the real world is not always possible, since real-world deployments have to tackle several challenges, including changing light conditions and privacy concerns. In this paper, we explore how smart fabric can detect the user's posture. We particularly focus on the user's arm posture and how it can be used for interacting with public displays. We conduct a preliminary study to record different arm postures and create a model to detect them. Finally, we conduct an evaluation study using a simple game that takes the arm posture as input. We show that smart textiles are suitable for detecting arm postures and feasible for this type of application scenario.
Research has identified benefits of large high-resolution displays (LHRDs) for exploring and understanding visual information. However, these displays are still not commonplace in work environments. Control rooms are one of the rare cases where LHRD workplaces are used in practice. To understand the challenges in developing LHRD workplaces, we conducted a contextual inquiry in a public transport control room. In this work, we present the physical arrangement of the control room workplaces and describe work routines, with a focus on interaction with visually displayed content. While staff members stated that they would prefer even more display space, we identified critical challenges for input on LHRDs and for designing graphical user interfaces (GUIs) for LHRDs.
This paper proposes a macro development and execution environment (Kawaluü) that uses tangible objects to adapt to the intentions of each user during multi-user access to a tabletop interface. Within the Kawaluü environment, the following are possible: 1) macro developers can program parameterized macros corresponding to tangible objects, so that users can change the parameters during macro use, and 2) users can directly change the parameters during macro use. Through these capabilities, we can enhance multi-user collaboration in a large-screen environment. We developed an environment that fulfills both conditions, 1) development and 2) execution, and applied it to the tabletop interface. We conducted qualitative evaluations of both development and execution, and quantitative evaluations of execution. The study suggested the usefulness of the environment in scenarios where multiple users use a tabletop interface with many kinds of macros.
We present CuffLink, a wristband designed to let users transfer files between devices intuitively using grab and release hand gestures. We propose using ultrasonic transceivers to enable device selection through pointing, and force-sensitive resistors (FSRs) to detect simple hand gestures. Our prototype demonstration of CuffLink shows that the system can successfully transfer files between two computers using gestures. Preliminary testing with users shows that 83% claim they would use a fully working device over typical sharing methods such as Dropbox and Google Drive. Apart from file sharing, we intend to make CuffLink a re-programmable wearable in the future.
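The grab/release detection from force-sensitive resistors can be approximated with simple hysteresis thresholds, as in this toy sketch; the threshold values are placeholders, not taken from the prototype.

```python
# Toy sketch of grab/release detection from force-sensitive resistor (FSR)
# readings, using hysteresis thresholds so noise near a single threshold does
# not cause flickering. Threshold values are placeholders.
GRAB_THRESHOLD = 0.6      # normalized FSR pressure to enter "grabbing"
RELEASE_THRESHOLD = 0.3   # lower exit threshold (hysteresis)

def detect_gestures(fsr_readings):
    """Yield 'grab'/'release' events from a stream of normalized readings."""
    grabbing = False
    for reading in fsr_readings:
        if not grabbing and reading > GRAB_THRESHOLD:
            grabbing = True
            yield "grab"
        elif grabbing and reading < RELEASE_THRESHOLD:
            grabbing = False
            yield "release"

stream = [0.1, 0.4, 0.7, 0.8, 0.5, 0.35, 0.2, 0.1]
print(list(detect_gestures(stream)))   # ['grab', 'release']
```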
Security measures to thwart guessing attacks on user authentication are required, because this type of attack identifies a victim's password more efficiently than a brute-force attack and has been a practical threat to user authentication. In this work, we propose a novel user authentication scheme based on "word-pair" credentials that could be an effective countermeasure against guessing attacks. We also implemented a prototype system and conducted evaluation experiments using it. The results of the evaluations show that the system is promising as a security measure against guessing attacks.
Biometric authentication offers promise for mobile security, but its adoption can be controversial, from both a usability and a security perspective. We describe a preliminary study comparing recollections of biometric adoption by computer security experts and non-experts, collected in semi-structured interviews. Initial decisions and thought processes around biometric adoption were recalled, as well as changes in those views over time. These findings should serve to better inform security education across differing levels of technical experience. Preliminary findings indicate that both user groups were influenced by similar sources of information; however, expert users differed in having more professional requirements affecting choices (e.g., BYOD). Furthermore, experts often added biometric authentication methods opportunistically during device updates, despite describing higher security concern and caution. Non-experts struggled with setting up fingerprint biometrics, leading to poor adoption. Further interviews are still being conducted.
Scarf plots visualize gaze transitions among areas of interest (AOIs) on timelines. Nevertheless, scarf plots are ineffective when there are many AOIs. To help analysts explore long temporal patterns, we present Alpscarf, an extension of scarf plots with mountains and valleys to visualize order-conformity and revisits. Alpscarfs are rendered in two complementary modes in aid of insight discovery. An R package of Alpscarf is available at github.com/chia-kaiyang/alpscarf.
The increasing use of unmanned aerial vehicles (i.e. drones) is raising a number of privacy issues. The high-resolution cameras that drones can carry may pose serious privacy hazards to the general public. Numerous media stories highlight incidents of drones spying on people, often with negative consequences (e.g. lawsuits and arrests). Our research seeks to understand how incorporating privacy-preserving technologies in drone design can mitigate privacy fears. We identify several classes of privacy-preserving technological solutions and evaluate their potential efficacy across a variety of situations through a Mechanical Turk survey of 7,200 respondents. Our results inform both drone designers and regulators who wish to develop, operate, and/or regulate novel drone services while assuaging public fears of privacy violations.
Following the 2017 Equifax data breach, we conducted four preliminary interviews to investigate how consumers view credit bureaus and the information flows around these agencies, what they perceive as risks of the Equifax breach, and how they reacted in practice. We found that although participants could properly articulate the purpose of credit bureaus, their understanding of credit bureaus' data collection practices was divided and incomplete. Although most of them conceptualized identity theft as the primary risk of data breaches disclosing credit information, and noted a lack of trust/self-efficacy in controlling their data collected by credit bureaus, they did not take sufficient protective actions to deal with the perceived risks. Our findings provide implications for the design of future security-enhancing tools regarding credit data, education and public policy, with the aim to empower consumers to better manage their sensitive data and protect themselves from future data breaches.
As online digital labor platforms grow in popularity, research is needed to understand how workers navigate the unique privacy concerns that emerge during their work. We surveyed 82 Amazon Mechanical Turk (MTurk) workers about how they make decisions about revealing personal data while doing tasks. We find that many comply with privacy-invasive information requests for a range of reasons, including benefits outweighing costs, fears of losing access to work, and contributing to scientific research. Several workers also engage in privacy-protective behaviors, motivated by perceptions that the information request is unnecessary or violates policies, as well as concerns about data use and identification. We discuss how our findings can inform policy and design to protect workers' privacy.
Many smartphone security mechanisms prompt users to decide on sensitive resource requests. This approach fails if the corresponding implications are not understood. Prior work identified ineffective user interfaces as a cause of insufficient comprehension and proposed augmented dialogs. We hypothesize that, prior to interface design, efficient security dialogs require an underlying permission model based on user demands. We believe that only an implementation that corresponds to users' mental models, in terms of the handling, granularity, and grouping of permission requests, allows for informed decisions. In this work, we propose a study design that leverages materialization for the extraction of mental models. We present preliminary results from three focus groups. The findings indicate that the materialization provided sufficient support for non-experts to understand and discuss this complex topic. In addition, the results indicate that current permission approaches do not match users' demands for information and control.
Similar to research in behavioral psychology, research in privacy and usable security has focused mainly on Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies. This excludes a large portion of the population affected by the privacy implications of technology. In this work, we report on a survey (N=117) in which we studied technology-related privacy concerns of users from different countries, including developing countries such as Egypt and Saudi Arabia, and developed countries such as Germany. By comparing results across those countries, and relating our findings to previous work, we bring forth multiple novel insights specific to the privacy of users from under-investigated countries. We discuss the implications of our findings for the design of privacy protection mechanisms.
Whereas interruptions are a very active subfield of research within HCI, interruptions in immersive virtual reality (IVR) have to date received little attention. We conducted a lab study (N=20) with a head-mounted display (HMD) to understand the relationship between presence, workload, and attention in IVR when testing three virtual interruption designs. This relationship is interesting because prior research has revealed a positive effect on performance when providing intelligent interruptions, for example based on users' level of attention. Our work launches research on interruptibility in IVR by investigating (1) the relationship between attention, presence, and workload, and (2) methods for measuring them in IVR. Our analysis suggests that a trade-off between presence and attention is required when designing interruptions for IVR. Our findings are valuable for researchers and practitioners who want to collect data on attention, presence, and workload in IVR to inform interruptibility.
Recent work has demonstrated that functional Near-Infrared Spectroscopy (fNIRS) has the potential to measure changes in mental workload with increasing ecological validity. It is not clear, however, whether these measurements are affected by the anxiety and stress of the workload: our informal observations show some participants enjoying the workload and succeeding at tasks, while others worry and struggle. This research evaluated the effects of stress on fNIRS measurements and performance, using the Montreal Imaging Stress Task to manipulate the experience of stress. While our results largely support the hypothesis that stress affects these measurements, our conclusions were undermined by data from the Rest condition, which indicated that mental workload and stress were often higher at rest than during tasks. We hypothesize that participants were experiencing anxiety in anticipation of subsequent stress tasks. We discuss this hypothesis and present a revised study designed to better control for this effect.
Playing with toy blocks reveals patterns in children's play that are valuable for therapy and assessment. Following the 2011 Tohoku Earthquake and Tsunami in Japan, we witnessed young survivors expressing post-trauma stress in block play. Motivated by the limitations of assessing this behavior using traditional methods, our paper describes the design rationale of AssessBlocks, an action-characterizing system using smartwatch-embedded toy blocks. Utilizing a smartwatch's Inertial Measurement Unit (IMU) and capacitive screen, a monitor is able to receive, visualize, and document sequential and quantitative play actions that were empirically selected from a preliminary block therapy study on children's post-disaster stress. We also propose our vision of a multi-dimensional behavioral assessment system using actions obtained by AssessBlocks.
As conversational agents continue to replace humans in consumer contexts, voice interfaces must reflect the complexity of real-world human interaction to foster long-term customer relationships. Perceiving the personality traits of others based on the way they look or sound is a key aspect of how humans unconsciously adapt their communication with others. In an effort to model this complex human process for eventual application to conversational agents, this paper presents the results of (1) building SVM and HMM classifiers for predicting perceived personality from speech signals, using a data corpus of 640 speech signals based on 11 Big Five personality assessments, (2) determining correlations between feature and speaker subgroups, and (3) assessing the SVM classifier's performance on new speech signals collected and assessed through a user study. This work is a small step towards the greater goal of designing more emotionally intelligent conversational interfaces.
Personality affects various social behaviors of an individual, such as collaboration, group dynamics, and social relationships within the workplace. However, existing methods for assessing personality have shortcomings: self-assessed methods are cumbersome due to repeated assessment and erroneous due to a self-report bias. On the other hand, automatic, data-driven personality detection raises privacy concerns due to a need for excessive personal data. We present an unobtrusive method for detecting personality within the workplace that combines a user's online and offline behaviors. We report insights from analyzing data collected from four different workplaces with 37 participants, which shows that complementing online and offline data allows a more complete reflection of an individual's personality. We also present possible applications of unobtrusive personality detection in the workplace.
The growing ease with which digital images can be convincingly manipulated and widely distributed on the Internet makes viewers increasingly susceptible to visual misinformation and deception. In situations where ill-intentioned individuals seek to deliberately mislead and influence viewers through fake online images, the harmful consequences could be substantial. We describe an exploratory study of how individuals react, respond to, and evaluate the authenticity of images that accompany online stories in Internet-enabled communications channels. Our preliminary findings support the assertion that people perform poorly at detecting skillful image manipulation, and that they often fail to question the authenticity of images even when primed regarding image forgery through discussion. We found that viewers make credibility evaluations based mainly on non-image cues rather than the content depicted. Moreover, our study revealed that in cases where context leads to suspicion, viewers apply post-hoc analysis to support their suspicions regarding the authenticity of the image.
In 2014-2015, the U.S. Federal Trade Commission (FTC) commissioned a study to assess consumers' ability to recognize ads and other paid content in online search results and news/article feeds. The co-authors designed the study, oversaw its execution, and analyzed the results, with support from FTC staff. The goals of the research were to assess the effectiveness of methods that online services use to label ads, and to see if alternative methods of labeling ads could improve consumers' ability to recognize them. In a controlled experiment, 48 consumers interacted with both desktop and mobile Web pages that were captured from search and online magazine websites. In half of the conditions, the Web pages were modified based on established Web design guidelines to improve the clarity of ad labeling. The participants' behavior, comments, and eye movements were recorded. Initial findings of this experiment are: (a) consumers cannot always distinguish ads, paid content, and paid search results from unpaid content, and (b) improving the salience and placement of labels based on established UI design guidelines can improve consumers' ability to recognize ads, paid content, and paid search results. We conclude with implications of the results and areas for future research.
Wearable technologies are increasingly popular, but often abandoned. Given their highly personal nature, aesthetics and form factor play a key role in adoption and continued use, but thus far little work has focused on this. This paper presents a three-part study to better understand the role of aesthetics and personalisation within wearables. We provided 15 participants with customised, low-fidelity, non-functional "activity trackers", based on their own designs, for in-the-wild evaluation. Our participants' use of these prototypes provided us with insights into their feelings towards their existing commercial devices and their own designs alike. We found that aesthetics plays an important, and currently underappreciated, role in use and continued engagement, particularly when the context of use is considered. We suggest that manufacturers embrace adaptability and DIY cultures, allowing end users to customise their wearables, and support them in appropriately choosing and creating their own designs.
A mixed reality (MR) based concept for supporting and optimizing the way operators work with automatic printed circuit board (PCB) assembly lines is proposed. To enhance the interface to the work process, users are outfitted with a head-mounted display (HMD), so they can both actively access process-relevant machine data and passively receive system notifications in a heads-up display (HUD), instead of having to manually query the terminal of the machine of interest at its location. This approach was implemented and tested in a field study with one of the assembly lines of an electronics manufacturing services (EMS) company. 30 staff members were recruited as test subjects, and 90% of them appreciated the system, citing its noticeable additional benefits compared to the status quo.
Our research looks to understand how to best design manipulatives within a mixed-reality (MR) system for the classroom. This paper presents insights into how teachers currently use physical manipulatives, to inform future MR designs in the K-5 classroom. Manipulatives are physical objects used for teaching; examples include coins, blocks, puzzles, and markers. K-5 teachers have been using physical manipulatives to help illustrate abstract concepts for decades. Physical manipulatives have proven highly valuable for students [7], and their high level of adoption by grade school teachers makes them a potential candidate for introducing MR into the classroom. In this research, we use participatory design, journey maps, and interviews to identify teacher challenges with current physical manipulatives and explore potential design directions for MR manipulatives in the classroom. Our preliminary findings suggest that MR could help improve autonomy in student learning and increase opportunities for collaboration between peers, as well as between teacher and student.
Teachers are trained to plan and conduct pedagogical activities. But as these activities become richer (more collaborative, with more open resources, and building upon an increasing number of digital tools), planning becomes increasingly important. We conducted contextual interviews with seven middle and high school teachers about their practices in planning and conducting pedagogical activities. We found that teachers design scripts to guide them through the session, as well as scripts for students to use independently. They adjust their scripts during a session and edit them afterward. They reuse old scripts and adapt scripts from other teachers and from online and physical sources. We derive implications for the design of scripting tools: supporting scripts at multiple levels of detail, and annotations for adjusting scripts during and after teaching sessions.
Improving workforce diversity in high tech is an ongoing challenge. We are currently analyzing a survey with 403 participants from the US and have found that IT professionals have different experiences depending on their job role. HCI professionals evaluate the core factors regarding work experience (e.g., having a valuing team, being offered challenges and support, having local role models, and experiencing personal power) more negatively than people in other IT job roles. Based on these first results from our survey, we discuss the role that status differences between job types, and the position HCI professionals hold in a product team, may play in producing their more negative work experiences. Further analyses and research with product teams are needed to explore these dynamics and the potential actions corporations can take.
Usability and user experience (UX) issues are often not well emphasized and addressed in open source software (OSS) development. There is an imperative need for supporting OSS communities to collaboratively identify, understand, and fix UX design issues in a distributed environment. In this paper, we provide an initial step towards this effort and report on an exploratory study that investigated how the OSS communities currently reported, discussed, negotiated, and eventually addressed usability and UX issues. We conducted in-depth qualitative analysis of selected issue tracking threads from three OSS projects hosted on GitHub. Our findings indicated that discussions about usability and UX issues in OSS communities were largely influenced by the personal opinions and experiences of the participants. Moreover, the characteristics of the community may have greatly affected the focus of such discussion.
When non-native English speakers (NNS) encounter messages they do not understand, they are often reluctant to ask native speakers (NS) for clarification. In this paper, we explored whether a conversational agent that asks clarification questions would increase NNS' willingness to ask questions. We compared two agents: one that asked for clarification about specific message elements and one that asked general clarification questions. NNS and NS rated how disruptive the agent was, the quality of the conversation, and whether they would feel embarrassed to ask their own questions. NNS found both types of agent less disruptive than NS did, but both groups found the specific agent more disruptive than the generic agent. NS rated the conversations higher in quality than NNS did, but there was no effect of agent condition. We discuss the potential of using conversational agents to boost NNS' confidence in conversation.
Previous research suggests that communication in international courses is usually multilingual. Students who share a native language may initiate course-related discussions either in their own language or in a common language shared by the whole class. However, when the native language of only a subset of students is used, it excludes others from the conversation. The current study aims to understand when and why an exclusive native language is used in communication in international courses. To do this, we conducted a 4-week diary study with 22 students taking the same class. These students came from 12 different countries but all spoke English as a common language. Through a preliminary analysis of the data, we extracted four scenarios in which students chose their native language over English for course-related discussions. Based on these scenarios, we identified design opportunities to assist multilingual communication in international courses.
One design challenge for tangible reading systems is how to leverage the design of physical properties to best support the learning process. In this paper, we present an exploratory study that investigated how 18 young adults learning English as a foreign language associated colours and materials with English letter-sound pairs. The preliminary results indicate that the letter-sound-colour mappings are influenced mainly by the literacy meaning of the letters, while the letter-sound-material mappings are strongly affected by the characteristics of the letter sounds. We discuss the design implications and future work for designing tangible reading systems for foreign language learners.
Young children adopt the media practices of their parents, which shape the possibilities for future "sustainable" media use as adults. This paper describes initial results from a small-scale study on sustainable family practices related to digital media use among American families with at least one child aged 6 or 7. Based on the analysis of eight interviews (with parents and children), this paper gives insight into the ways that parents and children "act sustainably" towards the digital media devices and content available in their homes. The work relies on recent advancements in media studies and sustainability in HCI. The paper tackles a problem space that affects the (online) well-being of today's young people. Its main contribution is to provide insights into the implications that the current sustainable media use of families with young children holds for future generations.
Pro-environmental behavior is widely promoted nowadays. While many empirical studies support that strengthening social norms is an effective way to motivate pro-environmental behavior, how this works in messaging is less understood. This study examined how one person messaging a friend about his/her own pro-environmental behavior affects both the friend's and his/her own future pro-environmental behavior. We conducted an in-situ study and found that receiving messages about pro-environmental behavior did not promote the receiver's behavior. Instead, the senders, who performed the assigned behaviors and messaged one of their friends about them, perceived greater ease and convenience and continued to take action after the intervention period. We conclude that doing and messaging pro-environmental behaviors helps foster the behavior.
Smallholder coffee farmers in Latin America depend on global supply chains for their livelihood, and many join certified cooperatives to increase access to fair prices. In order to find out what a fair price is, we designed CalcuCafé, a tool for coffee farmers to calculate their cost of production. An evaluation in Peru uncovered tensions between coffee farmers and cooperative technicians, highlighting barriers to information transparency at the production level. Our ongoing work to address these barriers strives to support the long-term viability of smallholder coffee producers, a sizable yet marginalized group at the intersection of HCI for sustainable agriculture and HCI for development.
Cities are increasingly integrating urban technologies into their infrastructures to improve municipal services, civic engagement, and quality of life for residents. Research suggests that technologies implemented in communities can worsen existing inequalities, yet there is little understanding of what underserved residents think about urban technologies or how they engage with their cities about technology policies or practices. Based on two technology forums held in underserved communities, we found that residents are motivated to participate in city technology planning because they believe technology impacts the economic and social health of their communities and because they are wary of the city's intentions behind certain urban technologies and policies. However, avenues for such engagement with the city were not accessible to our participants. We conclude that residents' participation in existing forms of governance poses an opportunity for city officials and those in underserved communities to collaboratively build urban technologies that benefit all.
Immigrants are usually pro-social towards their hometowns and try to improve them. However, a lack of trust in their government can drive immigrants to work individually. As a result, their pro-social activities are usually limited in impact and scope. This paper studies the interface factors that ease collaborations between immigrants and their home governments. We focus specifically on Mexican immigrants in the US who want to improve their rural communities. We identify that, for Mexican immigrants, having clear workflows showing how their money flows, and a sense of control over this workflow, is important for collaborating with their government. Based on these findings, we create a blockchain-based system for building trust between governments and immigrants. We finish by discussing design implications of our work and future directions.
In this late-breaking work, we present preliminary results from a portion of an autoethnography in which an HCI scholar drove for both Uber and Lyft over the course of 4 months, recording his thoughts about the driving experience as well as his experiences with, and emails from, both platforms. The first phase of results we present here is based on several text analyses of the collected emails, as well as a preliminary examination of field notes in relation to these emails. We found that while Uber and Lyft participate in the gig economy in almost identical ways, the difference in tone apparent in each platform's messaging could lead to conflicting experiences for drivers. We identify implications for future analyses of our autoethnographic data in relation to this psycholinguistic analysis.
Live streaming is a proliferating social medium that generates distinctive social interactions, especially in China. One example is tipping, in which viewers buy virtual gifts to reward streamers. This paper explores people's tipping behaviors and how they shape interactions between viewers and streamers. We analyzed 12 videos of live streams by labeling the viewer-streamer interactions and found that viewers were motivated by the reciprocal acts of streamers. Furthermore, we interviewed 5 streamers to understand their motivations behind the viewer-streamer interactions. Future work will investigate tipping motivations from the viewers' perspective.
With the rising use of multiple social network sites (SNSs), people now have an increasing number of options for audience, media, and other SNS features at their disposal. In this paper, our goal is to build machine learning models that can predict people's multi-SNS posting decisions, enabling technology that can personalize and augment current SNS use. We explore affordances, the perceived utilities of an SNS's features, for creating these models. We build an affordance-based model using data collected from a survey about people's multi-SNS posting behavior (n=674). Our model predicts posting decisions ~35% more accurately than a random baseline, and ~10% more accurately than predictions based on SNS popularity.
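To make the prediction step concrete, here is a minimal sketch assuming a standard off-the-shelf classifier over Likert-style affordance ratings; the abstract does not name the authors' learning algorithm, and every feature name and data value below is illustrative rather than taken from the survey:

    # Hypothetical sketch of an affordance-based posting-decision model;
    # the features and data are illustrative, not the authors' instrument.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    # Each row describes a (user, post) pair by perceived affordance ratings
    # of candidate SNSs, e.g. audience control, anonymity, ephemerality.
    X = rng.uniform(1, 7, size=(674, 5))   # 1-7 Likert-style ratings
    y = rng.integers(0, 3, size=674)       # which of 3 SNSs was chosen

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(model, X, y, cv=5)
    print(f"accuracy: {scores.mean():.2f} vs. random baseline {1/3:.2f}")

With random toy labels the classifier will of course hover at chance; the point is only the shape of the pipeline: affordance ratings in, posting decision out, accuracy compared against a random baseline.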
Much of the research on how technology can support better access to online information for people with intellectual disability focuses on a deficit model. Studies are typically based on a quantitative analysis of how various aspects of cognitive functioning might impact the usability of existing technologies. In this paper, we instead present new insights into the competencies and strategies that young adults with intellectual disability, who are often digital natives, are already employing to meet their online information needs. In-depth observations of 12 people using one of two search technologies (web search or video search) were analyzed together with other field notes. Beyond the importance of visuals and usability, we unpack a different view on efficiency and on the role played by emotional barriers, confidence, and social support in the use of search result ranking lists.
In this work, we capitalize on the motor performance criteria of the Kinematic Theory to examine stroke gestures articulated on touchscreens by people with motor impairments. We report empirical results on 278 stroke gestures collected from 7 participants with spinal cord injury and cerebral palsy, showing that the Kinematic Theory can successfully model their strokes and reflect the motor skills entailed by gesture articulation. To encourage and support future work on stroke gesture UIs for users with motor impairments, we show that gestures can be successfully synthesized by computer with the same geometric and kinematic characteristics as the gestures actually produced by people with motor impairments.
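For reference, the Kinematic Theory is commonly instantiated as the Sigma-Lognormal model, which the abstract does not restate; in that formulation (an assumption here, not a quotation from the paper), the speed profile of a stroke is a sum of lognormal pulses, one per neuromuscular command:

    % Sigma-Lognormal model: D_i is the command amplitude, t_{0i} its onset
    % time, and mu_i, sigma_i the log-time delay and log-response time.
    v(t) = \sum_{i=1}^{N} \frac{D_i}{\sigma_i \sqrt{2\pi}\,(t - t_{0i})} \exp\!\left(-\frac{\bigl(\ln(t - t_{0i}) - \mu_i\bigr)^2}{2\sigma_i^2}\right)

Fitting these parameters to a recorded gesture yields the reconstruction quality and motor-skill estimates, and sampling new parameter sets around the fitted values is one way such synthetic gestures can be generated.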
Current and emerging participative design practices are providing opportunities for people with intellectual disability to have a say in how technology can best support them and their individual needs. Yet technological experts and designers are not always confident about taking part in co-design sessions with people with intellectual disability and often favour less inclusive projects. In this paper, we present lessons learnt from a series of co-design exercises aimed at designing mobile or web applications to support people with intellectual disability, including a reframing of the concept of reciprocity. We believe these lessons can serve as recommendations for IT experts and IT students, encouraging and enabling them to design with people with intellectual disability and thus supporting greater inclusion.
Individuals regularly face challenges when interacting with their mobile devices, particularly if they are not tech-savvy users. When such difficulties occur, individuals often rely on more knowledgeable users to overcome them. However, many do not have a support network of knowledgeable individuals available. Moreover, some challenges go beyond the need for guidance, for example the difficulty people with motor impairments face in performing swipes. In this paper, we propose Data Donors, a conceptual framework that enables users with the capacity to help others to do so by donating their mobile interaction data and knowledge. Inspired by charitable donations, we outline the Data Donors framework and discuss three applications being developed under the data donation paradigm. Through this work, we will explore the consequences and opportunities of sharing one's data for the greater good and discuss the creation of global data donation programs.
A subset of "Situationally Induced Impairments and Disabilities" (SIID), termed "Severely Constraining Situational Impairments" (SCSI), was explored at the user task and motivational level, to better understand the challenges faced by users attempting to perform tasks using a mobile device. Through structured interviews, participants were found to deploy workarounds in attempting to complete mobile I/O transactions, even if that workaround might place them in considerable danger. The motivations underlying user decisions were also explored resulting in a set of rich scenarios which will be used in the final participatory design stage of the study to discover ways that technology can be designed to overcome SCSIs.
Mobility impairments can prevent older adults from performing their daily activities, which strongly impacts a person's quality of life. Exoskeleton technology can assist older adults by providing additional support to compensate for age-related decline in muscle strength. To date, little is known about the opinions and needs of older adults regarding exoskeletons, as current research primarily focuses on the technical development of exoskeleton devices. Therefore, the aim of this paper is to inform the design of exoskeletons from a human-centered perspective. Interviews were conducted with seven older adults and six healthcare professionals. Results indicated that exoskeletons can be a valuable addition to existing mobility devices. Accepting the need for mobility aids was found to be challenging due to associated stigmas. Therefore, an exoskeleton with a discreet appearance was preferred. Ultimately, the willingness to use exoskeleton technology will depend on personal needs and preferences.
Personal Informatics (PI) systems allow their users to collect data from a variety of sources for the purpose of extracting meaningful insights and making positive changes in their lives. Emerging consumer-grade Brain-Computer Interface (BCI)/EEG devices may provide an additional source of data for incorporating into PI systems. To explore users' expectations for brain-related PI systems we provided participants with a consumer-grade BCI headset and prototype mobile application capable of visualizing and recording their brain waves. Participants were interviewed to assess expectations for this type of technology. Our work contributes an understanding of users' various motivations for tracking brain activity data within a personal informatics system. We present our findings so far and discuss their implications for the design of a Cognitive Personal Informatics system, which we intend to deploy in a follow-up longitudinal field study.
As smart speakers such as the Amazon Echo have grown in popularity, these devices have presented new opportunities for exploring voice interaction, particularly in the domain of exercise. We have created TandemTrack, a multimodal system comprising a mobile app and an Alexa skill whose capabilities include exercise guidance, data capture, feedback, and reminders. We propose using TandemTrack in a deployment study to explore individuals' preferences for and use of voice and visual interactions, to test TandemTrack's overall effectiveness in promoting consistent exercise habits, and to compare the effectiveness of the app against the skill. Not only does TandemTrack take important steps toward addressing gaps in currently available exercise assistants; more importantly, we believe this is the first study to investigate the specific value that voice interactions may provide over other technologies in promoting consistent exercise.
This paper presents a case study on how interactive art-making activities improve older adults' quality of life. Research has shown that art-making is an effective way of working with older adults to combat social and health-related issues. We conducted a case study of an eight-week interactive art workshop at a local arts council. The workshop consisted of three major activities that integrated simple interactive techniques: interactive sound painting, light-up cards, and interactive soft objects. Overall, participants were highly engaged throughout all activities. We found that interactive technology in art and craft activities holds great potential for positively engaging older adults and improving their daily lives by inspiring creativity and self-expression and fostering collaborations and intergenerational relationships.
Older adults experience various physical, cognitive and emotional decline over the years, often leading to a waning of mobility skills up to a point of avoiding walking or traveling altogether. Since mobility is extremely important in maintaining an intact sense of independence and well-being as well as social interaction, the goal of this research is to gain insight into the mobility challenges that older adults face, and to define requirements for technology that would support their mobility.
As older adults (OAs) approach retirement, their financial management requirements change as they shift from income to pension or other assets. However, existing interactive budgeting apps neither support this transition, nor facilitate long-term financial planning. Our research aims to understand OAs' technological, educational, and behavioural barriers toward the adoption of budgeting applications. It also aims to uncover the design requirements for long-term financial planning apps that would overcome these adoption barriers. For this, we conducted a contextual inquiry to understand seniors' financial management practices. In-depth qualitative data collected both from individual sessions and participatory design activities has revealed significant gaps between the capabilities of existing apps; the best practices around long-term planning; and the attitudes and behaviours of OAs. We present here an argument, based on the preliminary analysis of the field data, for approaching the design of senior-centred interactive budgeting apps from a behavioural change and educational perspective.
We present mirrorU, a mobile system that supports users to reflect on and write about their daily emotional experience. While prior work has focused primarily on providing memory triggers or affective cues, mirrorU provides in-situ assessment and interactive feedback to scaffold reflective writing. It automatically and continuously monitors the composition process in three dimensions (i.e., level of detail, overall valence, and cognitive engagement) and provides relevant feedback to support reflection. In a 24-subject pilot deployment, we found that such scaffolding and feedback could help users compose longer reflections with more positive emotion words as well as more insight and causal words. We discuss how the literature on emotional writing informed mirrorU's design, and highlight the major findings as well as the lessons learned from the pilot study.
The paradigm of ubiquitous computing has the potential to enhance classroom behavior management. In this work, we used an action research approach to examine the use of a tablet-based behavioral data collection system by school practitioners, and to co-design an interface for displaying the behavioral data to their students. We present a wall-mounted display prototype and discuss its potential for supplementing existing classroom behavior management practices. We found that wall-mounted displays could help school practitioners provide a wider range of behavioral reinforcers and deliver specific and immediate feedback to students.
Choptop is an interactive chopping board that provides simple, relevant recipe guidance and convenient weighing and timing tools for inexperienced cooks. This assistance is particularly useful to individuals such as students who often have limited time to learn how to cook and are therefore drawn towards overpriced and unhealthy alternatives. Step-by-step instructions appear on Choptop's display, eliminating the need for easily-damaged recipe books and mobile devices. Users navigate Choptop by pressing the chopping surface, which is instrumented with load sensors. Informal testing shows that Choptop may significantly improve the ease and accuracy of following recipes over traditional methods. Users also reported increased enjoyment while following complex recipes.
Smart buildings generate a wealth of data about the spaces they contain. Yet, in evaluating them against occupant needs, sensor data alone is insufficient. Our contribution lies in a re-framing of smart building spaces around the human factor, and a critical lens on the criteria used to evaluate buildings. We propose future work on participatory technologies to evaluate complex and heterogeneous built environments with the people who live and work in them, recognising that their expertise is invaluable in creating quality spaces and ensuring their ongoing and sustainable use.
Zaatari, the world's largest Syrian refugee camp, currently hosts around 80,000 Syrian refugees. Located in the desert, the camp has become the fifth biggest city in Jordan. Previous examinations of crisis housing in refugee camps have assessed the re-appropriation of shelters in order to improve functionality. In this paper, we show how interior adornment serves a purpose in refugees' lives that goes beyond functionality. Our analysis of fieldwork conducted in Zaatari camp shows how decorating provides an escape from the camp and compensates for the loss of identity, home, and leisure. Within contexts of austerity, decorating spaces is a valuable and vital aspect of living, coping, and supporting people's sense of identity and pride. Through painting and decorating both public and private 'spaces', refugees transform them into 'places', creating a sense of home. We highlight how the capability of decorating, crafting, and making is an enactment of freedom within contexts of political restrictions and resource limitations.
When a group of citizens wants to tackle a social problem online, they need to discuss the problem, possible solutions, and concrete actions. Instant messengers are a common tool used in this setting, which support free and unstructured discussion. But tackling complex social problems often calls for structured discussion. In this paper, we present Micro-NGO, a chat-based online discussion platform with built-in support for (1) the problem-solving process and (2) the action planning process. To scaffold the process, Micro-NGO adopts a question prompting strategy, which asks relevant questions to users in each stage of the problem-solving process. Users can answer the questions and vote for the best answer while they freely discuss in the chat room. For an informal evaluation, we conducted a pilot study with two groups (n=7). The participants held a discussion while collectively answering the question prompts and reached consensus to send a petition letter about campus issues to the related personnel.
Refugee integration is a long process that follows resettlement into a country where refugees not only face language and culture barriers but also difficulties integrating into the workforce, receiving a good education and accessing healthcare. In addition to the UNHCR, the UN refugee agency, there are countless organizations and nonprofits around the world focused on trying to facilitate this process and provide support where needed. This paper presents preliminary user research undertaken in Clarkston, Georgia (USA), to study the process. We used qualitative research methods including contextual analysis and semi-structured interviews to evaluate the ease of integration for refugees who have relocated to Clarkston. We took a human-centered design approach to identify the gaps in the current process and to present preliminary design recommendations.
This paper presents an augmented reality (AR) system that supports early reading and spelling acquisition of English for children. The design of the AR system was based on and extends our prior tangible reading system called PhonoBlocks. In this paper, we discuss why and how we extend the work from PhonoBlocks to an AR system design. The goal of our system is to teach children explicit letter-sound correspondences. Two key design features of our system, which are different from other systems, are the use of augmented dynamic colour cues and 3D physical lowercase letters that help to draw children's attention to how letters' positions in words change letter sounds. Unlike our previous system, our AR system uses off-the-shelf technology so it can be easily scaled and distributed. We discuss the advantages and opportunities for our AR solution, and outline our future plans for evaluating this system.
Evolving technology and the growing connectedness of devices allow more opportunities for video consumption and greater integration into our everyday activities. This study conducts exploratory research on video interaction to better understand how people currently seek, avoid, and attempt to control video content. Data was collected through a semi-structured interview process with 10 participants. Three interesting trends emerged in people's video watching behavior: (1) social context and obligations change behavior; (2) participants prefer more participatory parental monitoring techniques; and (3) storyline plays an important role in video viewing behavior. These findings can help inspire future research, and help designers and technologists recognize the complexities and dynamic nature of how people watch and attempt to control their video content. Designing with this knowledge can improve users' experience when they consume media content in the form of videos.
The use of voice-activated virtual assistants (VAs) to execute tasks, request information, or entertain oneself on the smartphone is on the rise. However, insufficient feedback on the states of VAs may impair the interaction flow. We propose using nonverbal emotional expressions to indicate a VA's conversational states and promote user engagement. We introduce three emotional expression designs for VAs: iconic facial expressions, a text box with body movements, and emotional voice waveforms. Our user study results show that a VA with an expressive face or text-box body movements can evoke stronger user engagement than one with voice waveforms.
Humans sense most of the environment through their eyes; gaze is therefore a powerful way to estimate visual attention. Head-mounted or mobile eye tracking is an established tool for analyzing people's visual behavior. Since these systems usually require some kind of calibration prior to usage, a new generation of mobile eye tracking devices based on corneal imaging has been investigated. However, little attention has been given to how to analyze eye tracking data specific to corneal imaging. A classic approach in state-of-the-art systems is to extract different eye movements (e.g., fixations, saccades, and pursuit movements), but so far there is no approach for applying these methods to corneal imaging data. We present a proof-of-concept method for fixation extraction and clustering of corneal imaging data. With this method we can compress the eye tracking data and make it ready for further analysis (e.g., attention measurement and object detection).
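The abstract does not name the extraction algorithm; a dispersion-threshold detector (I-DT) is a common baseline for fixation extraction and gives a feel for the step being ported to corneal imaging data. A minimal sketch, with thresholds chosen arbitrarily:

    # Dispersion-threshold (I-DT) fixation extraction, a common baseline;
    # the paper's corneal-imaging-specific method may differ.
    def idt_fixations(gaze, max_dispersion=1.0, min_duration=4):
        """gaze: list of (x, y) samples; returns (start, end) index pairs."""
        def dispersion(w):
            xs, ys = zip(*w)
            return (max(xs) - min(xs)) + (max(ys) - min(ys))
        fixations, start = [], 0
        while start <= len(gaze) - min_duration:
            end = start + min_duration
            if dispersion(gaze[start:end]) <= max_dispersion:
                # grow the window while the samples stay spatially compact
                while end < len(gaze) and dispersion(gaze[start:end + 1]) <= max_dispersion:
                    end += 1
                fixations.append((start, end))
                start = end
            else:
                start += 1
        return fixations

    print(idt_fixations([(0, 0)] * 8 + [(5, 5)] * 8))  # -> [(0, 8), (8, 16)]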
We propose an interaction restraint that degrades the interactivity of a device, for example by asking users to perform a mandatory cognitive task whenever they start an interaction. This mechanism is designed to help users reflect on their intent when interacting with their devices, so that they can break the habit of unconscious, frequent access to their smartphones. We performed a preliminary study to understand the design requirements of the cognitive tasks and developed a high-fidelity prototype. Our field trial documents a positive influence of interaction restraints on deterring habitual, frequent smartphone use.
A key requirement of successful online marketing is to maintain the quality of hyperlinks. However, it is not uncommon for users to get confused or disappointed by a wide range of misalignments between links and their landing pages. This paper presents an online survey that identifies types of such misalignments perceived by recipients.
While wearable devices for fitness have gained broad popularity, most are focused on tracking general activity types rather than correcting exercise forms, which is extremely important for weightlifters. We interviewed 7 frequent gym-goers about their opinions and expectations for feedback from wearable devices for weightlifting. We describe their desired feedback, and how their expectations and concerns could be balanced in future wearable fitness technologies.
We investigate a novel, non-visual approach to overviewing object-oriented source code and evaluate the efficiency of different categories of sounds for the purpose of getting an overview of source code structure for a visually impaired computer programmer. A user study with ten sighted and three non-sighted participants compared the effectiveness of speech, non-speech sounds, and spearcons on measures of accuracy and enjoyment for the task of quickly overviewing a class file. Results showed positive implications for the use of non-speech sounds in identifying programming constructs and for aesthetic value, although the effectiveness of the other sound categories on these measures is not ruled out. Additionally, various design choices of the application impacted results, which should be of interest to designers of auditory displays, accessibility, and education.
ESports tournaments, such as Dota 2's The International (TI), attract millions of spectators who watch broadcasts on online streaming platforms, communicate, and share their experiences and emotions. Unlike traditional streams, tournament broadcasts lack a streamer figure to whom spectators can appeal directly. Using topic modelling and cross-correlation analysis of more than 3 million messages from 86 games of TI7, we uncover the main topical and temporal patterns of communication. First, we disentangle contextual meanings of emotes and memes, which play a salient role in communication, and show a meta-topic semantic map of streaming slang. Second, our analysis shows a prevalence of event-driven game communication during tournament broadcasts and particular topics associated with event peaks. Third, we show that 'copypasta' cascades and other related practices, while occupying a significant share of messages, are strongly associated with periods of lower in-game activity.
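As a rough illustration of such a pipeline (the abstract does not specify the authors' tooling, so gensim and numpy here are stand-ins, and the chat data is toy):

    # Stand-in pipeline: LDA topics over tokenized chat, then the lag at
    # which a topic's message rate best aligns with an in-game signal.
    import numpy as np
    from gensim import corpora, models

    messages = [["gg", "wp", "ez"], ["pog", "play", "pog"],
                ["throw", "gg", "throw"], ["ez", "mid", "ez"]]
    dictionary = corpora.Dictionary(messages)
    corpus = [dictionary.doc2bow(m) for m in messages]
    lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=20)

    topic_rate = np.array([1.0, 4, 2, 8, 3])   # messages/min on one topic
    event_rate = np.array([0.0, 3, 1, 9, 2])   # e.g., team-fight intensity
    xc = np.correlate(topic_rate - topic_rate.mean(),
                      event_rate - event_rate.mean(), mode="full")
    print("best-aligning lag:", int(np.argmax(xc)) - (len(event_rate) - 1))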
Research on gamified educational platforms has chiefly focused on game elements motivating continued engagement, neglecting whether and why people choose to use them in the first place. Grounded in Uses & Gratifications Theory, this study therefore combined use diaries with follow-up interviews to explore the situated reasons for use among 83 students who voluntarily used a gamified online learning platform. Partial data analysis suggests a motivational threshold of gamification: game design elements do not motivate the initiation of new use sessions per se, but can prolong an already started session. Other pre-existing sought uses and gratifications are required for gamification to work, although gamification may indirectly support these. The main reasons for initiating use of the gamified learning platform were learning, curiosity, fun, need for closure, and competence.
Demand for mental health services often cannot be met, resulting in high costs and lengthy wait times. In response to this problem, technological solutions have been proposed, such as computerised cognitive behavioural therapy (cCBT) and online peer-to-peer (P2P) support groups, which have been shown to be effective for non-severe cases of anxiety and depression and can sometimes exceed the effectiveness of face-to-face (F2F) therapy. However, P2P support is often viewed as inferior, and this perception has demotivated potential peers from participating in existing cCBT P2P platforms. To address this perception, we are designing a novel cCBT P2P game that uses gamification techniques to motivate players to participate as helping peers. Specifically, we are examining the Proteus Effect, which purports that a player will adopt qualities of their avatar in a contextual narrative.
In this paper, we describe a comparative analysis of several emotional reporting techniques for assessing gameplay experience. We compared newer feedback methods, such as the Sensual Evaluation Instrument (SEI) and recent tools for capturing physiological data, with more traditional techniques such as think-aloud, the Immersive Experience Questionnaire (IEQ), and retrospective interviews. We present results from an exploratory study with 7 participants who all played the same game and gave feedback on their emotional experience using this suite of methods. Preliminary results indicated that mixed methods offer complementary strengths in registering player emotion, are useful at different moments of play, and help to accommodate differences in the ways players' emotions manifest. We argue for the value of using multiple methods in evaluating players' emotional response to games.
Players can build implicit understanding of challenging scientific concepts when playing digital science learning games [7]. In this study, we examine the implicit computational thinking (CT) skills of 72 upper elementary and middle school students and 10 computer scientists playing a game called Pizza Pass. We report on the process of creating automated detectors that identify four CT skills from gameplay: problem decomposition, pattern recognition, algorithmic thinking, and abstraction. This paper reports on hand-labeling playback data with acceptable inter-rater reliability and on 100 gameplay features distilled from digital log data. In future work, we will mine these features to automatically identify the CT skills previously labeled by humans. These automated detectors of CT will be used to analyze gameplay data at scale and provide actionable feedback to teachers in real time.
Textile production is a large and profitable industry that still struggles with issues relating to environmental impact and societal concerns such as labour rights. Incorporating sustainable practices to reduce these issues would be highly beneficial for future generations. Potential measures exist at both the industry and company level, as well as in the purchase behaviour of individual consumers. We present Textile Manager, a persuasive game designed to encourage players to consider their own textile-related behaviour. Based on expert interviews, we discerned goals for a persuasive game to create awareness of issues in traditional textile production. Textile Manager is a proof-of-concept prototype that was evaluated with a pre-post exposure study. We report findings regarding the persuasive effect, voluntary information quests, and the visualization of the consequences of player decisions, as a contribution to the persuasive technology research community.
The digitalized world comes with increasing Internet capabilities, making it easier than ever to connect people over distance. Video conferencing and similar online applications bring people together virtually when they physically cannot spend as much time together as they would like. However, such remote experiences tend to lose the feel of traditional ones: people lack direct visual presence, and no haptic feedback is available. In this paper, we tackle this problem with our system CheckMate. We combine Augmented Reality with capacitive 3D-printed objects that can be sensed on an interactive surface to enable remote interaction while providing the same tangible experience as in co-located scenarios. As a proof of concept, we implemented a sample application based on the traditional game of chess.
Video has been used to give people sensations or insights into another person's perspective by providing real-time feeds of first-person viewpoints. Less explored is rapid and dynamic perspective changing, which can introduce uncertainty for users about whether their feeds are manipulated and whether they are viewing themselves or another. We report on initial trials of a 3-person wireless headset system. Each headset incorporates an external video camera and an internal screen that provides its wearer with visual information. Camera transmissions are rapidly and automatically switched between headsets, providing wearers with a continuous cycling through 1st-, 2nd-, and 3rd-person perspectives. Based on testing several physical game activities, we offer suggestions to assist developers in designing intense embodied perspective-changing experiences.
Songket is a traditional cultural heritage and national identity of Malaysia. Yet its hand-weaving practices are increasingly endangered, and we know little about the current challenges faced by rural songket weavers. This paper reports interviews with 12 home-based weavers from a Malay village. We recognized two key actors in the songket supply and demand chain, weavers and middlemen, and outline the motivations and challenges of three types of weavers, alongside the multiple roles played by the middleman. We conclude with three design implications supporting songket cultural heritage preservation and weavers' economic empowerment.
Our co-located social relationships are changing with the adoption of new technologies. Augmented Reality (AR) performed on clothing can bridge the gap between our online social media and our face-to-face interactions. We propose a new system composed of an online application that generates a personalized and artistic design for the user's clothing based on personal interests extracted from a social network. We also present an AR application to explore this social design and improve icebreaking interactions and connections between strangers in community settings and social gatherings.
This paper describes an early design research exploration into the potential of folds and pockets to serve as places for safekeeping and secrecy in wearables. We explore what such secrecy may mean through woven data codes. We report on early material exploration, a pilot study with ten participants, and the personalization of a data object. We then outline how we will use these early indications to build future stages of the project.
Mindfulness meditation has significant benefits for health and well-being but requires training. A wealth of mindfulness meditation apps have been developed in recent years. However, there has been limited academic work evaluating these technologies. This paper reports an auto-ethnographic and expert evaluation study of the 16 most popular iPhone mindfulness apps in the UK. Findings indicate that these apps focus mostly on guided meditation, with limited support for monitoring intrinsic meditation processes and measuring the effectiveness of the training. We propose a more nuanced discourse around such apps, concluding with implications for design, including new tools for supporting intrinsic meditation processes and the bodily kinetic aspects fostering mindfulness, together with a call for developing guidelines for evaluating the effectiveness of such applications.
We explore an unconventional format for representing the structure of online courses: beaded representations. We used this format as a mediational tool to engage a design team in reflective discussion about the design of its courses. We discuss challenges associated with the design of "massive open online courses" (MOOCs) and position beaded representations within the context of human-computer interaction (HCI) literature on materiality, novel representational forms, and the use of boundary objects to support design teams. We describe the outcomes of a focus group session with design team members mediated by the beaded representations, which include: (1) discovery of curricular connections, (2) understanding of learner experience, (3) insights about the design process, and (4) reflection on the method.
Peer tutor learning has been effectively adopted in various learning contexts. It draws attention from the CHI community because it has the potential to be advanced through computer-mediated online video technologies. In this paper, we focus on understanding how Q&A (questions and answers) between tutors and their students is used to promote learning with online videos. Building upon our prior work, we designed and implemented a new Q&A interaction. Through a field study and a follow-up survey of 47 graduate students, our research reveals gaps between a peer tutor's strategy (i.e., question position and knowledge type) and a learner's preference and answering effort. For example, compared to tutors, learners preferred that questions be inserted in the middle of the videos and that the answers to tacit knowledge-related questions be more numerous and longer than those for explicit knowledge-related questions. The design implications from our findings should be incorporated into practice and into future research on online videos for peer tutor learning.
Public advice columns have provided information, satisfied reader curiosity, and ignited discussion since the late 1600s. The role of the advice columnist can be understood as a form of cultural intermediary who identifies and assesses individual problems of potential relevance to wider audiences. Online columnists are now joined in this analysis of the human condition by communities of contributors who offer supporting or alternative judgments and directives for action. We examine the potential of these online advice columns as a material resource for assisting novice designers in identifying and understanding authentic human problems from multiple perspectives. We present insights from a small pilot study where university students used this design method as a framing mechanism for proposing socio-technical interventions. We conclude with consideration of the value, optimal usage, and limitations of adopting this approach in generating design ideas and training novice designers.
The Biological HCI (Bio-HCI) framework is a design framework that investigates the relationships among humans, computers, and biological systems by redefining biological materials as design elements. Bio-HCI focuses on three major components: biological materials, intermediate platforms, and interactions with the user. The framework was created through collaboration between biotechnologists, HCI researchers, and speculative design researchers. To examine it further, we present four experiments that focus on different aspects of the Bio-HCI framework. The goals of this paper are to 1) lay out the Bio-HCI framework, 2) explore applications of biological-digital interfaces, and 3) analyze existing technologies and identify opportunities for future research.
While humans possess a well-developed sense of direction and can easily walk to a visible target, that ability is drastically reduced when visual cues are lacking. In situations where people cannot depend on sight, orientation can become a critical issue, as when escaping a room filled with smoke, swimming in open water, hiking in fog, or crossing woods at night. In this paper, we present the design and implementation of VOS, a Visual Orientation System for providing an augmented sense of direction. Our system uses LEDs to offer cues on how to correct the current heading. Our findings demonstrate the viability of such a system, as well as its usability. We discuss the implications for designing technology that enables people to orient themselves and navigate places with little or no visual cues.
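The core control logic such a system needs is small: compute the signed error between the current heading and the bearing to the target, then map it to a left/right light cue. A sketch under that assumed mapping (the paper's actual cue design is not given here, and the LED drivers are abstracted away as returned strings):

    # Hypothetical heading-to-cue mapping; real LED drivers would replace
    # the returned strings.
    def heading_cue(heading_deg, target_deg, dead_zone=10):
        error = (target_deg - heading_deg + 180) % 360 - 180  # signed, -180..180
        if abs(error) <= dead_zone:
            return "straight"                 # on course: no correction cue
        return "turn right" if error > 0 else "turn left"

    assert heading_cue(350, 10) == "turn right"   # wrap-around handled
    assert heading_cue(90, 80) == "straight"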
Sensalert is a wearable health tracking application that enables monitoring of a target population's health status in real time. The application integrates health monitoring parameters from wearable sensors, e.g., temperature and heart rate, with relevant environmental parameters, e.g., weather and topography, and calculates the corresponding physiological strain index for each point in time. The objective of this application is to provide early warning, support decision making, and increase situational awareness by displaying comprehensive information and providing alerts at the right time and place. This application is the first of its kind being developed for integration into the Defense Threat Reduction Agency's Biosurveillance Ecosystem (BSVE). We present the user requirements and design principles for developing this type of application. We hope this work helps and inspires future research in the area of wearable technologies for surveillance.
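The abstract does not give the index's formula; a widely used formulation is Moran et al.'s Physiological Strain Index, which combines core temperature and heart rate on a 0-10 scale and is a plausible reference point for what such an application computes:

    % Physiological Strain Index (Moran et al., 1998); T_t and HR_t are the
    % current core temperature (deg C) and heart rate (bpm), T_0 and HR_0
    % the resting baselines.
    \mathrm{PSI} = 5\,\frac{T_t - T_0}{39.5 - T_0} + 5\,\frac{HR_t - HR_0}{180 - HR_0}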
Aesthetic qualities are critical for smart jewelry, which is worn and regarded as an expressive artefact. However, current tools for prototyping smart jewelry do not treat aesthetic considerations as a primary concern. We therefore created Snowflakes, a design speculation for a modular, "snap-on-off" prototyping tool for designing smart jewelry. The design requirements of Snowflakes were determined by studying non-smart jewelry and extracting 7 parameters (limb, material, grip, fastener type, decoration, decoration placement, and form). Drawing upon these parameters, Snowflakes is proposed as a tool for prototyping smart jewelry that synthesizes the form language of conventional jewelry with technology-adorned smart jewelry. This paper explores using this product as a design tool to experiment with designs blending aesthetics and function.
Proprioceptive awareness is an essential but challenging skill to master. In HCI physical training research, the design space of how technology can help people to develop such awareness remains narrow. Here, we present a technological device that exteriorizes misalignments of different body parts by translating them to haptic feedback. We present preliminary insights gained during the design process and device testing, and trace the future steps of its technological development.
Mobile devices generate a tremendous number of notifications every day. While some of them are important, a huge number are not of particular interest to the user. In this work, we investigate how users manually defer notifications using a rule-based approach. We provide three types of rules: suppressing, summarizing once a day, and snoozing to a specific point in time. In a user study with 16 participants, we explore how users apply these rules. We report on usage behavior as well as feedback received during interviews. Finally, we derive guidelines that inform future notification deferral systems.
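A minimal sketch of the three rule types named above, with all class and field names invented for illustration (the study's implementation is not shown in the abstract):

    # Three deferral rule types: suppress, daily summary, snooze-to-time.
    from dataclasses import dataclass
    from datetime import datetime, time

    @dataclass
    class Notification:
        app: str
        text: str
        arrived: datetime

    def apply_rule(rule, notif, digest):
        if rule["type"] == "suppress":
            return None                          # drop silently
        if rule["type"] == "summarize":
            digest.append(notif)                 # deliver in once-a-day digest
            return None
        if rule["type"] == "snooze":
            deliver_at = datetime.combine(notif.arrived.date(), rule["at"])
            return (deliver_at, notif)           # re-deliver at a fixed time
        return (notif.arrived, notif)            # default: deliver now

    digest = []
    rule = {"type": "snooze", "at": time(18, 0)}  # snooze to 6 pm
    n = Notification("news", "Breaking...", datetime.now())
    print(apply_rule(rule, n, digest))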
We bring the concepts of emotional self-awareness and emotion sharing to the problem space of Mixed Reality. Adding our observations on the influence of physical space on emotions, we speculate whether we can mediate the spatial experiences of collocated people to increase their emotional self-awareness and ease emotion sharing between them. Our study is structured around a conceptual system we designed, one that mediates the spatial attributes of the surroundings according to the user's and collocated people's emotional states. After the initial ideation of the concept, we built a prototype application for HoloLens. We then conducted a two-part user study with 12 participants: first we gathered data using an online survey, followed by a study with a think-aloud protocol using the prototype. We believe that the results and discussion provided in this paper will be useful in the development of emotionally aware Mixed Reality systems.
Goal setting is theoretically grounded, empirically supported, and one of the most popular behavior change strategies employed by activity trackers: individuals set daily activity goals (e.g., 10,000 steps) and the tracker provides continuous feedback on their performance. But while empirical studies are in abundance, we still have a limited understanding of users' real-life practices with goal setting, which may well vary from what is theoretically assumed and can have detrimental effects on behavior change. We present an ongoing study, consisting of an online survey, inquiring into what goals people set, how they engage with feedback, and the impact this has on their physical activity.
In this ongoing study, we aim to redesign an existing dynamic digital checklist application (app) for trauma resuscitations in a regional trauma center. The design followed an iterative, user-centered approach. Trauma team physician leaders and research coordinators at the center participated in a survey and usability study to provide feedback for improving the user interface. Proper optimization of the user experience is necessary for future adoption of the digital checklist. This study lays the groundwork for in situ use and evaluation of the checklist by trauma team members.
While user experience assessment enables understanding users' perceptions of a product, limitations have been encountered when elders use questionnaires to evaluate user experience. In this paper, we present the design process of Aestimo, a tangible interface to assist elderly people in evaluating the user experience of interactive prototypes. Our prototype is a simplification of the AttrakDiff questionnaire that also records one's overall opinion (i.e., speech) and emotions. In addition, our design uses playful interaction styles that are familiar to the elderly. In a preliminary evaluation, elderly participants found Aestimo entertaining and easy to use. As future work, we aim to explore new materials for building Aestimo and to perform a comprehensive evaluation with several elders.
Designing navigation aids for older pedestrians could help them stay autonomous in their daily activities. Here we present a description of the navigation experience of older pedestrians either with a visual (augmented reality glasses) or an auditory (bone conduction headset) wearable device providing guidance messages adapted to age-related declines. The study, with 18 participants, analyses the navigation performance and users' experience using explicitation interviews. We highlight that sharing attention, trusting the system temporality and experiencing the discretion of the device are the main concerns for older people, impacting the quality of their user experience. These findings are discussed in terms of design recommendations for digital devices.
This paper explores how seniors perceive Voice Enabled User Interfaces (VUIs) and the factors that shape those perceptions. An experiment was administered to 15 seniors (over the age of 65), in which the participants searched for information using a traditional keyboard/mouse interface and an experimental voice/touch interface. An analysis of the data collected showed that seniors perceive meaningful differences between the two interfaces in terms of learnability, usability, ease of understanding, and helpfulness.
This paper reports findings from a preliminary experiment in which we designed and tested two interface augmentations for enhancing credibility judgments of news stories on Facebook. We find that users' credibility judgments can be improved by the two augmentations, though the changes in credibility scores were not statistically significant. However, participants spent longer using the design that gave them control over the evaluation process, and appeared to be more confident about the choices they made using it - despite the fact that their judgments were actually less accurate. We outline directions for future work based on these findings.
Nudge theory has been widely used in government and health interventions for the subtlety with which it positively influences choices. But can this gentle nudge influence users to exhibit secure online account behaviors? And further, which of the nudge types is most effective? We conducted a between-subjects experiment with five conditions, plus a control, in which participants (n = 263) experienced different types of nudges (incentives, norms, default, salience, ego) before indicating their comfort levels with an auto-generated password and their intentions to create a new password. Results show the salience nudge most effectively reduced participants' comfort with keeping the auto-generated password. These preliminary findings imply that nudges using multiple psychological effects could serve as important design cues, making intended behaviors easy to perform.
Recent research and developments in HCI allow imagining what kind of sensory devices could be used for pedestrian navigation in the future. This study was aimed at questioning participants' expectations for their future pedestrian mobility and the acceptability of five futuristic sensory devices (a smart lens, vibrating clothes, a music app, an olfactory plug and a smart map). Results mainly highlighted an apparent mismatch between participants' expectations (looking at the surroundings and walking safely) and prospective acceptability of the devices (map is preferred while requiring attention sharing). Thus, the relevance of the common acceptability criteria for prospective research about navigation aids and sensory wearables was discussed.
On the surface, task completion should be easy in graphical user interface (GUI) settings. In practice, however, different actions look alike and applications run in operating-system silos. Our aim in GUI action recognition and prediction is to help users, at least in completing tedious tasks that are largely repetitive. We propose a method that learns from a few user-performed demonstrations, and then predicts and performs the remaining actions in the task. For example, a user can send customized SMS messages to the first three contacts in a school's spreadsheet of parents; our system then loops the process, iterating through the remaining parents.
First, our analysis system segments the demonstration into discrete loops, where each iteration usually includes both intentional and accidental variations. Our technical innovation is a solution to the standing motif-finding optimization problem that also finds visual patterns in those intentional variations. The second challenge is to predict subsequent GUI actions, extrapolating the patterns so that our system can predict and perform the rest of a task. We validate our approach on a new database of GUI tasks and show that our system usually (a) gleans what it needs from short user demonstrations, and (b) autocompletes tasks in diverse GUI situations.
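Stripped of the motif-finding machinery, the predict-and-perform loop reduces to inferring a per-row action template from the demonstrated iterations and replaying it over the remaining rows. A deliberately tiny sketch (send_sms stands in for real GUI actions; the actual system works on raw GUI events, not structured rows):

    # Toy version of demonstrate-then-autocomplete over a contact sheet.
    def send_sms(number, text):                  # hypothetical GUI action
        print(f"SMS to {number}: {text}")

    rows = [("Ana", "555-0101"), ("Ben", "555-0102"),
            ("Cam", "555-0103"), ("Dee", "555-0104")]

    def template(name, number):                  # inferred from the demos
        send_sms(number, f"Hi {name}, the PTA meets Friday.")

    for name, number in rows[:3]:                # user demonstrates 3 loops
        template(name, number)
    for name, number in rows[3:]:                # system autocompletes the rest
        template(name, number)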
The 'blank slate problem', the difficulty of getting started with a new program, is a common user experience problem. In this paper, we explore one solution: importing data from a similar product. Our team created a feature for our browser onboarding process that automatically imports data from the user's pre-existing browser. We then conducted remote, unmoderated usability studies and quantitative experiments. Our initial findings show that auto-import can solve the blank slate problem by getting users started quickly with the browser, but it introduces issues related to perceived privacy and performance. We provide recommendations that can inform design decisions related to user data and users' perceptions of their privacy, particularly as they relate to tech proficiency.
Augmented reality (AR) is a new and largely unexplored means of immersing users in the application they are using, especially games. However, trackerless AR is only now becoming widely available to both users and developers, and there are few guidelines or conventions developers can reference for design heuristics. With the recent push for more AR apps from the two leading mobile platforms, there is a need to create new heuristic guidelines for user interactions with augmented reality user interfaces (UIs). Our research compares and contrasts a traditional mobile UI, in which menu options appear in screen space, with an AR interface, in which menu options are placed relative to the object they affect. We offer recommendations for this type of interaction; starting with small-scale tests like ours is a first step toward solidifying an interface standard.
Understanding visitor experience is an important task for curators seeking to provide visitors with a suitable experience and learning context. However, due to the limitations of human memory, relying on visitors' post-interview descriptions of their overall experience is often not enough. We developed a system that uses commercial JINS MEME glasses and our own analysis software to help curators explore possible experiential touch points throughout a museum visit. This study shows that eye movement data and visit context can be used together to help curators, in addition to measuring overall visitor experience, focus on visitor experience at specific moments in order to design and plan exhibitions and museum services more effectively.
Feelings of stress can have negative impacts on mental and physical health. In response, a significant number of stress management applications (apps) have been developed, but little is known about their functionality. We conducted a feature analysis of 26 stress management and monitoring apps to identify required improvements. We found that the reviewed apps supported users with reflection, but did not include adequate functions to support action taking (i.e., initiating and maintaining behaviour change). This paper contributes a discussion of how to improve the design of stress management apps, with examples of good practice, and of how healthcare providers can use this information to leverage such apps in clinical care.
Multiscreen TV viewing refers to a spectrum of media productions that can be watched using a TV and companion screens such as smartphones and tablets. In recent years, companies have been creating companion applications to enrich the TV viewing experience, but viewers are reluctant to use them because they have to download dozens of second-screen applications. This paper proposes integrating the creation of companion-screen content into a single object-based preproduction tool. It identifies, from the perspective of TV production professionals, the best paradigm and the features needed to support content authoring for multiscreen viewing experiences.
We are experiencing a trend of integrating computing functionality into ever more common and popular devices. While these so-called smart devices offer many possibilities for automating and personalizing everyday routines, interacting with them and customizing them requires either programming effort or a dedicated smartphone app to control the devices.
In this work, we propose and classify Personalized User-Carried Single Button Interfaces (PUCSBIs) as shortcuts for interacting with smart devices. We implement a proof-of-concept of such an interface for a coffee machine. Through an in-the-wild deployment of the coffee machine over approximately three months, we report initial experiences from 40 participants using PUCSBIs to interact with smart devices.
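The shortcut idea reduces to "one press, one personalized device action". A minimal sketch, assuming the appliance listens on an MQTT topic; the broker address and topic name below are hypothetical, not from the paper:

```python
# Hypothetical single-button shortcut: one press publishes the user's identity to
# the appliance's MQTT topic, which triggers that user's stored preference.
import paho.mqtt.publish as publish

BROKER = "192.168.0.10"                  # hypothetical local broker
TOPIC = "home/coffee-machine/brew"       # hypothetical appliance topic

def on_button_press(user_id: str) -> None:
    publish.single(TOPIC, payload=user_id, hostname=BROKER)

on_button_press("user-42")
```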
SweepScreen is a digital paintbrush that enables the printing of low-fidelity, lasting, free-form images. It consists of a row of electromagnets that turn on and off in a specific pattern while the user sweeps across a magnetophoretic surface, thus producing a custom image. Despite being low fidelity (black-and-white, static images), SweepScreen works on passive surfaces to create bistable images. It can therefore replace physical paper on notice boards, packaging, or other labeling systems. It can also be used to rapidly prototype static free-form displays to better understand interactions with new display form factors. This could be of interest to the shape-changing community, which struggles to access manufactured free-form displays. We present our concept and proof-of-concept prototypes that can print images on flat free-form surfaces. We show the results of performance evaluations to test the viability of the device and finish with a discussion of current limitations.
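The driving principle is simple: each column of a 1-bit image sets the on/off pattern of the magnet row at the corresponding sweep position. A minimal sketch, with a stand-in for the (hypothetical) coil-driver call:

```python
# Sketch of column-by-column coil driving; set_coils() stands in for real hardware.
import numpy as np

def set_coils(states):
    print("".join("#" if s else "." for s in states))  # visualize the coil pattern

image = np.array([[1, 0, 1, 1],    # 1-bit image: rows = coils, columns = sweep steps
                  [0, 1, 1, 0],
                  [1, 1, 0, 0]])

for column in image.T:             # one column per position reported during the sweep
    set_coils(column)
```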
Given advances in ubiquitous computing, we can now link information to places and objects anywhere on the globe. For years, map-based interfaces have been the primary means of browsing and retrieving this situated media. While other interface concepts for situated media, most notably Augmented Reality, have more recently found their way out of research labs, this work looks into a concept that has been widely ignored: accurate pointing in outdoor environments. We present Urban Pointer, a phone-based pointing interface that uses computer vision to enable accurate pointing in urban environments, together with first insights from its implementation.
In this paper, we describe a novel electronic musical interface, consisting of a strand-like object that can be physically transformed by bending to create various shapes and signifiers. Users are encouraged to explore a visual language in a musical context, as each new shape portrays a different musical instrument with unique sonic behavior. We apply concepts from research in the area of cross-modal perception as guidance for mapping shapes and signifiers to corresponding sounds.
We use smartphones and their apps for almost every daily activity. For instance, to purchase a bottle of water online, a user has to unlock the smartphone, find the right e-commerce app, search for the water product by name, and finally place an order. This procedure requires manual, often cumbersome, user input, but could be significantly simplified if the smartphone could identify an object and process this routine automatically. We present Knocker, an object identification technique that uses only commercial off-the-shelf smartphones. The basic idea of Knocker is to leverage the unique set of responses that occur when a user knocks on an object with a smartphone: the sound generated by the knock and the changes in accelerometer and gyroscope values. Knocker employs a machine learning classifier to identify an object from the knock responses. A user study was conducted to evaluate the feasibility of Knocker with 14 objects in both quiet and noisy environments. The results show that Knocker identifies objects with up to 99.7% accuracy.
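As a rough illustration of such a pipeline (the paper's exact features and classifier are not reproduced here), one could fuse a knock's acoustic spectrum with inertial statistics and train a standard classifier; the data below are synthetic:

```python
# Hedged sketch of a knock-response classifier; features and data are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def knock_features(audio, accel, gyro):
    """Fuse the knock's low-frequency spectrum with IMU deltas into one vector."""
    spectrum = np.abs(np.fft.rfft(audio))[:256]
    spectrum /= spectrum.max() + 1e-9
    imu = [accel.std(axis=0), np.ptp(accel, axis=0),
           gyro.std(axis=0), np.ptp(gyro, axis=0)]
    return np.concatenate([spectrum, np.concatenate(imu)])

rng = np.random.default_rng(0)
X = np.stack([knock_features(rng.normal(size=2048),
                             rng.normal(size=(50, 3)),
                             rng.normal(size=(50, 3))) for _ in range(140)])
y = np.repeat(np.arange(14), 10)                 # 14 objects, 10 knocks each
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # chance level on synthetic data
```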
We present ExtensionClip, a system that offers back-of-device touch interaction for a cardboard head-mounted display; it links the back of the smartphone to its touchscreen to greatly enhance the touch input area and usability. ExtensionClip sandwiches the smartphone between two magnets; the front magnet, which tracks the position of the back magnet, triggers touch events on the smartphone's touchscreen by capacitive coupling. ExtensionClip detects the user's touch on the back magnet from a change in the size of the touch area on the smartphone's touchscreen; thus ExtensionClip offers pointing functionality like a mouse. We fabricate an ExtensionClip prototype and conduct user studies. In the first study, we determine the threshold needed to detect the user's touch on the back magnet. Next, we measure the precision and selection speed possible with ExtensionClip.
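The touch detector itself reduces to a calibrated size threshold on the capacitive blob; a minimal sketch, where the threshold value is an illustrative assumption rather than the study's calibrated one:

```python
# Illustrative back-touch detector: a finger on the back magnet enlarges the
# capacitively coupled blob on the touchscreen past a calibrated size threshold.
def back_touch(area_px: float, threshold_px: float = 210.0) -> bool:
    return area_px >= threshold_px

for area in (150.0, 190.0, 230.0):
    print(area, "->", "touch" if back_touch(area) else "hover")
```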
In this work, we propose a simple yet effective method for synthesizing a pseudo-2.5D scene from a monocular video for mixed reality (MR) content, along with the ParaPara system, which applies this method. Most previously proposed systems convert real-world objects into 3D graphic models using expensive equipment, a barrier for individuals or small groups wanting to create MR content. ParaPara uses four points in an image and their manually estimated distances to synthesize MR content by applying deep neural networks and simple image processing techniques to monocular videos. The synthesized content can be observed through an MR head-mounted display, and spatial mapping and spatial sound are applied to support interaction between the real world and the MR content. The proposed system is expected to lower the barriers to entry for creating MR content because it can create such content from the large number of previously captured videos.
Notifications in everyday virtual reality (VR) applications are currently realized by displaying generic pop-ups within the immersive virtual environment (IVE) containing the sender's message. However, this approach tends to break the user's immersion. In order to preserve immersion and the suspension of disbelief, we propose adapting the method of notification to the user's current situation in the IVE and the message's priority. We introduce the concept of adaptive and immersive notifications in VR, along with an open-source framework that implements our approach. The framework aims to serve as an easy-to-extend code base for developers of everyday VR applications. As an example, we implemented a messaging application that a non-immersed person can use to send text messages to an immersed user. We describe the concept and our open-source framework and discuss ideas for future work.
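The adaptation logic can be pictured as a small decision rule over message priority and user state; the framework itself targets VR engines, so the Python rendition and category names below are purely illustrative:

```python
# Illustrative adaptation rule: map priority and the user's situation to a style.
from dataclasses import dataclass

@dataclass
class Message:
    text: str
    priority: int              # 0 = low ... 2 = urgent

def choose_presentation(msg: Message, user_busy: bool) -> str:
    if msg.priority == 2:
        return "modal-popup"       # break immersion only when urgent
    if user_busy:
        return "deferred-queue"    # hold until the current interaction ends
    return "in-world-object"       # e.g. a letter appearing on a virtual desk

print(choose_presentation(Message("Dinner at 7?", priority=0), user_busy=True))
```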
This paper presents the ongoing development of a proof-of-concept, adaptive system that uses a neurocognitive signal to facilitate efficient performance in a Virtual Reality visual search task. The Levity system measures and interactively adjusts the display of a visual array during a visual search task based on the user's level of cognitive load, measured with a 16-channel EEG device. Future developments will validate the system and evaluate its ability to improve search efficiency by detecting and adapting to a user's cognitive demands.
This work presents Dynamic Object Scanning (DO-Scanning), a novel interface that helps users quickly browse long, untrimmed first-person videos. The interface offers users a small set of automatically generated object cues tailored to the context of a given video. Users choose which cues to highlight, and the interface in turn fast-forwards the video adaptively while keeping scenes with highlighted cues playing at original speed. Our experimental results reveal that DO-Scanning arranges an efficient and compact set of cues, and that this set is useful for browsing a diverse set of first-person videos.
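The playback rule at the heart of this can be summarized in a few lines; the cue names and speed factor below are illustrative assumptions:

```python
# Adaptive fast-forward: frames containing a highlighted cue play at 1x,
# all other frames are skimmed at high speed.
def playback_speed(frame_objects: set, highlighted: set, fast: float = 8.0) -> float:
    return 1.0 if frame_objects & highlighted else fast

timeline = [{"cup"}, {"door"}, {"laptop", "cup"}, set()]
print([playback_speed(objs, {"cup"}) for objs in timeline])  # [1.0, 8.0, 1.0, 8.0]
```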
As information technologies (IT) are both drivers of highly engaging experiences and sources of disruption at work, the phenomenon of flow - defined as "the holistic sensation that people feel when they act with total involvement" [5, p. 36] - has been suggested as a promising vehicle for understanding and enhancing user behavior. Despite the growing relevance of flow at work, contemporary approaches to measuring flow are subjective and retrospective, limiting our ability to investigate and support flow in a reliable and timely manner. Hence, we require objective, real-time classification of flow. To address this issue, this article combines recent theoretical considerations from psychology and experimental research on the physiology of flow with machine learning (ML). The overall aim is to build classifiers that distinguish flow states (i.e., low and high flow). Our results indicate that flow classifiers can be derived from physiological signals. Cardiac features seem to play an important role in this process, resulting in an accuracy of 72.3%. Our findings may serve as a foundation for future work aiming to build flow-aware IT systems.
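As a hedged sketch of what such a cardiac-feature pipeline might look like (the paper's exact features and classifier are not specified here), standard heart-rate-variability measures can feed a simple classifier; the data below are synthetic:

```python
# Illustrative flow classifier on standard HRV features; data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def hrv_features(rr_ms: np.ndarray) -> np.ndarray:
    """RR intervals (ms) -> [mean heart rate (bpm), SDNN, RMSSD]."""
    diffs = np.diff(rr_ms)
    return np.array([60000.0 / rr_ms.mean(),
                     rr_ms.std(),
                     np.sqrt((diffs ** 2).mean())])

rng = np.random.default_rng(1)
X = np.stack([hrv_features(rng.normal(800, 50, size=120)) for _ in range(80)])
y = rng.integers(0, 2, size=80)        # 0 = low flow, 1 = high flow
print(cross_val_score(LogisticRegression(), X, y, cv=5).mean())
```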
With the advent of decentralised energy resources (DERs), there has been increased pressure on classic grid infrastructure to manage non-dispatchable resources. In the face of these challenges, microgrids provide a new way of managing and distributing DERs and are characterised as core building blocks of smart grids. In this context, blockchain technology enables transaction-based systems for keeping track of energy flows between producing and consuming parties. However, such systems are not intuitive and pose challenges to users' understanding. In our current work, we introduce a user-centric approach to utilising a transactional data structure, providing transparency and understanding of when and from where electric energy is consumed. We present our approach to an engaging user interface and a preliminary study with feedback from solar installation owners, and close with remarks on our future research plans.
Estimating mental workload from brain signals such as electroencephalography (EEG) has proven very promising in multiple Human-Computer Interaction (HCI) applications, e.g., for designing games or educational applications with adaptive difficulty, or for assessing how cognitively demanding an interface is to use. However, current EEG-based workload estimation may not be robust enough for some practical applications: the workload classification accuracies obtained so far are relatively low, making the resulting estimations not fully trustworthy. This paper therefore studies promising modern machine learning algorithms, including Riemannian geometry-based methods and Deep Learning, to estimate workload from EEG signals. We study them with both user-specific and user-independent calibration, to move towards calibration-free systems. Our results suggest that a shallow Convolutional Neural Network obtained the best performance in both conditions, outperforming state-of-the-art methods on the data sets used. This suggests that Deep Learning can bring new possibilities to HCI.
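A shallow EEG ConvNet of this kind typically stacks a temporal convolution, a spatial filter across electrodes, pooling, and a linear readout; the layer sizes below are illustrative, not the paper's exact architecture:

```python
# Minimal shallow CNN for EEG workload classification; sizes are illustrative.
import torch
import torch.nn as nn

class ShallowEEGNet(nn.Module):
    def __init__(self, n_channels=32, n_samples=512, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 20, kernel_size=(1, 25)),           # temporal filters
            nn.Conv2d(20, 20, kernel_size=(n_channels, 1)),  # spatial filters
            nn.BatchNorm2d(20),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 60), stride=(1, 15)),
            nn.Flatten(),
        )
        with torch.no_grad():    # infer the flattened size with a dummy pass
            n_feat = self.features(torch.zeros(1, 1, n_channels, n_samples)).shape[1]
        self.classify = nn.Linear(n_feat, n_classes)

    def forward(self, x):        # x: (batch, 1, channels, samples)
        return self.classify(self.features(x))

model = ShallowEEGNet()
print(model(torch.randn(4, 1, 32, 512)).shape)   # torch.Size([4, 2])
```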
The position and orientation of a monitor affect users' behavior at their desks. In this study, we explored and designed six types of interaction between an actuated monitor and its user to induce posture changes. We built a virtual monitor that simulates the motions of an actuated monitor and moves slowly in the direction opposite to unbalanced sitting postures. We conducted an exploratory study with eight participants, which showed participants' responses and step-by-step posture changes toward balanced sitting postures. As a contribution, we share considerations for designing monitor actuation that induces posture interventions.
Cognition-aware systems acquire physiological data to derive implications about physical and mental states. Pupil dilation has recently attracted attention in the HCI community as an indicator of mental workload, and the impact of mental workload on pupillary behavior has been extensively examined. However, systems that use these measurements to alleviate mental workload have scarcely been evaluated. Our work investigates the practicality of adapting task complexity based on pupillary data in real time. By having participants complete math tasks of different complexities, we calibrate a complexity adjustment system. In a pilot study (N=6), we evaluate the feasibility of changing task complexity using two complexity levels. Our findings show less perceived mental workload during task complexity adaptation than when presenting high task complexity only. We show the potential of pupil dilation as a valid metric for assessing mental workload in cognition-aware user interfaces.
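One plausible shape for such a real-time adaptation loop is a sliding window over baseline-corrected pupil diameter with a calibrated threshold; the threshold and window length below are illustrative assumptions, not the study's calibrated values:

```python
# Illustrative real-time complexity adapter driven by pupil dilation.
from collections import deque

class ComplexityAdapter:
    def __init__(self, baseline_mm, threshold_mm=0.4, window=30):
        self.baseline = baseline_mm
        self.threshold = threshold_mm
        self.samples = deque(maxlen=window)   # sliding window of recent dilation
        self.complexity = "high"

    def update(self, pupil_mm):
        self.samples.append(pupil_mm - self.baseline)
        if len(self.samples) == self.samples.maxlen:
            mean_dilation = sum(self.samples) / len(self.samples)
            self.complexity = "low" if mean_dilation > self.threshold else "high"
        return self.complexity

adapter = ComplexityAdapter(baseline_mm=3.1)
for sample in [3.2] * 10 + [3.8] * 40:        # sustained dilation under load
    level = adapter.update(sample)
print(level)                                   # 'low' -> task was simplified
```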
Wearable Virtual Reality (WVR) is thought to offer a powerful approach in the treatment of subjects with Neurodevelopmental Disorders (NDD), e.g., to improve attention skills and autonomy. We propose to integrate WVR applications with wearable bio-sensors. The visualization of the information extracted from these devices, integrated with measures derived from interaction logs, would help therapists monitor the patient's state and attention levels during a WVR experience. The comparison of results along different sessions would facilitate the assessment of patients' improvements. This approach can be exploited to complement more traditional observation-based evaluation methods or clinical tests and can support evidence-based research on the effectiveness of Wearable VR for persons with NDD.
This paper presents Social MatchUP, a multiplayer virtual reality game for children with Neurodevelopmental Disorders (NDD). Shared virtual reality environments (SVREs) allow children with NDD to interact in the same virtual space, but without the possible discomfort or fear caused by having a real person in front of them. Social MatchUP is a simple Concentration-like game, run on smartphones, in which players must communicate to match up all the pairs of images they are given. Because each player can interact with only half of the pictures but can see what their companion is doing, the game improves social and communication skills and can also be used as a learning tool. A simple, easy-to-use customization tool was also developed to let therapists and teachers adapt the game's context to the needs of the children in their care.
We propose a novel system that displays various food textures by using jamming, a physical process by which particles become rigid as their density increases. We utilize it as a simple and effective method for controlling the hardness and shape of materials chewed in the mouth. We implemented a prototype pneumatic jamming system to verify the feasibility of adapting jamming for food texture displays. Experimental results confirmed that the prototype provides various levels of hardness perceived in the mouth. Feedback from experiment participants indicated that the provided range of hardness spanned from the softness of marshmallow to the hardness of sherbet.
3D food printing technologies offer a range of opportunities for HCI, yet so far applications have been limited. We report a survey exploring the attitudes of early adopters towards 3D food printing technology, with the aim of helping designers create successful applications for this technology.
With recent developments in 3D display interfaces, which are now capable of delivering rich and immersive visual experiences, a need has arisen to develop haptic-feedback technologies that can seamlessly be integrated with such displays -- in order to maintain the sense of visual realism during interactions and to enable multimodal user experiences. We present an approach to augmenting conventional and 3D displays with free-space haptic feedback capabilities via a large number of closely-spaced air-vortex-ring generators mounted along the periphery of the display. We then present our ongoing work on building an open-source system based on this approach that uses 16 vortex-ring generators, and show how it could serve as a multimodal interactive interface, as a research tool, and as a novel platform for creative expressions.
Research on wearable controllers has shown that body-worn cords have many interesting physical affordances that make them powerful as a novel input device to control mobile applications in an unobtrusive manner. With this paper, we want to extend the interaction and application repertoire of body-worn cords by contributing the concept of visually augmented interactive cords using state-of-the-art augmented reality (AR) glasses. This novel combination of simultaneous input and output on a cord has the potential to create rich AR user interfaces that seamlessly support direct interaction and reduce cognitive burden by providing visual and tactile feedback. As a main contribution, we present a set of cord-based interaction techniques for browsing menus, selecting items, adjusting continuous values & ranges and solving advanced tasks in AR. In addition, we present our current implementation including different touch-enabled cords, its data transmission and AR visualization. Finally, we conclude with future challenges.
Haptic feedback is an important part of virtual reality (VR), where it can increase immersion and enjoyment. Within VR, research literature and products are mainly limited to vibrotactile feedback for the torso. We believe that additional types of haptic feedback on other areas of the body could yield interesting VR experiences. Thus, we present HapticSerpent, a waist-worn robot capable of delivering various haptic feedback to the torso, neck, face, arms, and hands. We present our implementation specifications, followed by an initial evaluation measuring the distinguishability of taps applied to the torso, and a survey of the acceptability of receiving feedback at different locations on the body. Participants noted overall higher accuracy on the upper part and sides of the torso, and they generally disfavored haptic feedback in sensitive areas due to potential harm. Lastly, we discuss various research opportunities and challenges and present our future direction.
Recent Virtual Reality (VR) systems render highly immersive visual experiences, yet they currently lack tactile feedback for feeling virtual objects with our hands and bodies. Shape displays offer solid tangible interaction but have not been integrated with VR, or have been restricted to desktop-scale workspaces. This work fuses mobile robotics, haptic props, shape-display technology, and commercial Virtual Reality to overcome these limitations. We present Mediate, a semi-autonomous mobile shape display that locally renders 3D physical geometry co-located with room-sized virtual environments, as a conceptual step towards large-scale tangible interaction in Virtual Reality. We compare this "dynamic just-in-time mockup" concept to other haptic paradigms and discuss future applications and interaction scenarios.
Augmented Reality (AR) lets users sketch 3D designs directly attached to existing physical objects. These objects provide natural haptic feedback whenever the pen touches them, and, unlike in VR, there is no need to digitize the physical object first. Especially in Personal Fabrication, this lets non-professional designers quickly create simple 3D models that fit existing physical objects. We studied how accurately visual lines and concave/convex surfaces let users draw 3D shapes attached to physical versus virtual objects in AR. Results show that tracing physical objects is 48% more accurate, but takes longer, than tracing virtual objects. Concave physical edges further improve accuracy thanks to their haptic guidance. Our findings provide initial metrics for designing AR sketching systems.
Well-designed notifications can successfully inform users of various devices and systems about their status; however, poorly designed ones can lead to negative user experiences. In this study, we conducted electroencephalography and behavioral experiments to understand how three musical parameters -- harmonic richness, pitch, and tempo -- influence users' auditory perception and attention shifting. By understanding the relation between changes in musical parameters and users' cognitive and behavioral responses, designers can better assess the possible cognitive effects of manipulating these parameters. We hope that the findings of this study can serve as cognitive guidance for audio-notification designers and help them explore the design space more efficiently.
SoundGlove is a tool for exploring everyday objects through a tangible, synesthetic experience of sound. The device facilitates this exploration by allowing the user to physically record and mix sounds by grabbing them out of the air and dropping them into a bowl. Sounds deposited into the bowl are mixed together and can be played back, enhancing abilities not only in observation but also in sound creation. This paper describes the design and implementation of the SoundGlove system and considers potential applications suggested by early testing with users.
Because appearance-constrained robots lack expressiveness, human users often find it hard to understand their behavior and intentions. To address this, expressive lights are considered an effective means for such robots to communicate with people. However, existing studies mainly focus on specific tasks or goals, leaving open the question of how expressive lights affect people's perception. In this pilot study, we investigate this question using a Roomba robot. We designed two light expressions, namely green and low-intensity (GL) and red and high-intensity (RH), and used open-ended questions to evaluate people's perception and interpretation of the robot, which displayed the different light expressions as a way to communicate. Our findings reveal that simple light expressions allow people to construct rich and complex interpretations of a robot's behavior, and that such interpretations are heavily biased by the design of the expressive lights.
Previsualization (previs) is an essential phase in the design process of narrative media such as film, animation, and stage plays. Digital previs can involve complex technical tasks, e.g., 3D scene creation, animation, and camera work, which require trained skills not available to all personnel involved in creative decisions for the production. Interaction techniques such as virtual reality (VR) enable users to interact with 3D content in a natural way compared to classical 2D interfaces. As a first step, we developed VR-based prototypes and performed an exploratory user study to evaluate how non-technical professionals from the film, animation, and theater domains assess the use of VR for previs. Our results show that users were able to interact with complex 3D scenes after a short familiarization phase and rated VR for previs as useful for their professional work.
Voice-based intelligent agents (VIAs), such as Alexa, Siri, and Bixby, have recently become popular. One of the interaction challenges with VIAs is that it is difficult to deliver rich information, experience, and meaning via voice-only communication channels. We propose interactive thermal augmentation to address this challenge. We developed a prototype system and conducted a user study to investigate the effects of thermal interaction in a VIA context. The preliminary results revealed that: 1) the thermal interface helped participants understand the information better; 2) the integration of heat and sound provided an immersive and engaging experience; and 3) a thermal stimulus worked as an additional feedback channel supplementing the voice interface. We discuss potentials and considerations for adopting thermal interaction to enrich people's experiences with VIAs.
Privacy issues can be difficult for end-users to understand and are therefore a key concern for information-sharing systems. This paper describes a deployment of fifteen Bluetooth-beacon-enabled 'creatures' spread across London's Queen Elizabeth Olympic Park, which initiate conversations on mobile phones in their vicinity via push notifications. Playing on the common assumption that neutral public settings promote anonymity, users' willingness to converse with personified chatbots is used as a proxy for understanding their inclination to share personal and potentially disclosing information. Each creature is linked to a conversational agent that asks for users' memories and their responses are then shared with other creatures in the network. This paper presents the design of an interactive device used to test users' awareness of how their information propagates to others.
During interventions, surgeons often need to review medical imaging data, e.g., CT scans. Usually, surgeons must rely on an assistant to browse the images because of sterility requirements. Communication with a substitute operator is tedious, is error-prone if the operator does not have an equal level of professional experience, and can interrupt the workflow. We present a portable, wearable, sensor-integrated shoe that allows surgeons to browse and manipulate 2D medical image data through foot movement. The shoe uses an optical sensor taken from an off-the-shelf computer mouse to track foot movements, plus a micro-switch to turn it on or off. We evaluated the performance of the shoe interface against a control condition with an assistant in an empirical user study with ten surgeons. Our results provide first indications of the effectiveness of a shoe interface in this application area.
Mobile virtual reality (VR) head-mounted displays (HMDs) are steadily becoming part of people's everyday lives. Most current interaction approaches either rely on additional hardware (e.g., the Daydream controller) or offer only a limited interaction concept (e.g., Google Cardboard). We explore a solution in which a conventional smartwatch, a device users already carry with them, enables short interactions but also allows longer, more complex interactions with mobile VR. To explore the possibilities of smartwatch interaction, we conducted a user study comparing two variables with regard to user performance: interaction method (touchscreen vs. inertial sensors) and wearing method (hand-held vs. wrist-worn). We found that selection time and error rate were lowest when holding the smartwatch in one hand and using its inertial sensors for interaction (hand-held).
Smartwatch users often report usability problems despite the device's handiness and convenience: fingers often block the screen, and both hands are typically required to operate the gadget. To address these issues, we present PairRing, a ring-shaped rotatable smartwatch controller. PairRing allows users to scroll up and down through listed items by turning the ring on their index finger with their thumb. To determine the optimal ring shape and rotation speed, we designed and conducted a user study and report the results. We found that (1) users performed tasks better with the angular ring prototype; (2) the rapid rotation speed was better suited to browsing ordered lists; and (3) overall, participants were positive about the feasibility of the prototype. We conclude with a discussion of the design implications of PairRing and its future applications.
Conversational agents are becoming more common, influenced by the success of Siri and Alexa. As such, new methods and associated challenges of designing for conversational systems are emerging. One factor unique to conversational agents that we as designers need to account for is personality. People attribute personalities to conversational agents, strongly influenced by social expectations, whether or not a particular personality was designed deliberately. In the case of multilingual agents, this creates additional challenges: direct translations don't accommodate cultural variation. We discuss the design process and lessons learned from Radar Pace, a conversational coach that launched with support for five languages. We highlight successes and failures based on user study results and propose changes to the design process to avoid pitfalls for future agents.
The Business Intelligence (BI) paradigm is challenged by emerging use cases such as news and social media analytics in which the source data are unstructured, the analysis metrics are unspecified, and the appropriate visual representations are unsupported by mainstream tools. This case study documents the work undertaken in Microsoft Research to enable these use cases in the Microsoft Power BI product. Our approach comprises: (a) back-end pipelines that use AI to infer navigable data structures from streams of unstructured text, media and metadata; and (b) front-end representations of these structures grounded in the Visual Analytics literature. Through our creation of multiple end-to-end data applications, we learned that representing the varying quality of inferred data structures was crucial for making the use and limitations of AI transparent to users. We conclude with reflections on BI in the age of AI, big data, and democratized access to data analytics.
Building user empathy in a tech organization is crucial to ensure that products are designed with an eye toward user needs and experiences. The Pokerface program is a Google-internal user empathy campaign with 26 researchers that has helped more than 1,500 employees, including engineers, product managers, designers, analysts, and program managers across more than 15 sites, have first-hand experiences with their users. Here, we discuss the goals of the Pokerface program, some challenges we have faced during execution, and the impact we have measured thus far.
In traditional usability studies, researchers talk to users of tools to understand their needs and challenges. Insights gained through such interviews offer context, detail, and background. Due to the costs in time and money, we are beginning to see a new form of tool interrogation that prioritizes scale, cost, and breadth by utilizing existing data from online forums. In this case study, we set out to apply this method of using online forum data to a specific issue: the challenges users face with Excel spreadsheets. Spreadsheets are a versatile and powerful processing tool if used properly. However, with versatility and power come errors, from both users and the software, which make spreadsheets less effective. By scraping posts from the website Reddit, we collected a dataset of questions and complaints about Excel. Specifically, we explored and characterized the issues users were facing with spreadsheet software in general, and in particular those resulting from large amounts of data in their spreadsheets. We discuss the implications of our findings for the design of next-generation spreadsheet software.
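The collection step of such a study can be done with the Reddit API; the sketch below uses the PRAW library with placeholder credentials, and its keyword filter is an illustrative stand-in for the paper's actual coding scheme:

```python
# Hedged sketch of scraping candidate complaint posts from r/excel via PRAW.
import praw

reddit = praw.Reddit(client_id="YOUR_ID", client_secret="YOUR_SECRET",
                     user_agent="spreadsheet-issues-study")

keywords = ("slow", "crash", "too many rows", "freezes")
complaints = []
for post in reddit.subreddit("excel").new(limit=500):
    text = f"{post.title} {post.selftext}".lower()
    if any(k in text for k in keywords):      # keep posts about scale/performance
        complaints.append({"title": post.title, "score": post.score})

print(len(complaints), "candidate complaint posts")
```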
We present an educational activity for college students to think critically about the truthfulness of news propagated in social media. This activity utilizes TwitterTrails, a visual tool to analyze Twitter claims, events, and memes. This tool provides views such as a propagation graph of a story's bursting activity, and the co-ReTweeted network of the more prominent members of the audience. Using a response and reflection form, students are guided through these different facets of a story. The classroom activity was iteratively designed over the course of three semesters. Here, we present the learning outcomes from our final semester's evaluation with 43 students. Our findings demonstrate that the activity provided students with both the conceptual tools and motivation to investigate the reliability of stories in social media. Our contribution also includes access to the tool and materials to conduct this activity. We hope that other educators will further improve and run this activity with their own students.
This paper reflects on the development of a multi-sensory clubbing experience deployed during a two-day event within the Amsterdam Dance Event in October 2016. We describe how the entire experience was developed end-to-end and deployed at the event through the collaboration of several project partners from industries such as art and design, music, food, technology, and research. Central to the system are smart textiles: wristbands equipped with Bluetooth LE sensors that were used to sense people attending the dance event. We describe the components of the system, the development process, the collaboration between the involved entities, and the event itself. To conclude, we highlight insights gained from conducting a real-world research deployment across many collaborators and stakeholders with different backgrounds.
This project aimed to develop a mobile support system for children and teenagers undergoing scoliosis bracing therapy. The system comprises multiple sensors measuring individual wearing behavior and a smartphone application that serves as the system's user interface. The app was developed following a user-centered design approach, integrating participatory design and ethnographic methods. From the beginning, special emphasis was placed on identifying the needs of the user group. The main operating functions and the interaction concept were iteratively refined and optimized by involving users at all stages of product development. For evaluation, we conducted a series of follow-up usability tests and a multi-day field survey. We report and analyze the challenges we were confronted with before and during the product development process.
Computer-mediated communication technology is ubiquitous in today's society. However, the design of these technologies often takes a screen-based approach and requires users to adopt new usage conventions. While these methods have been widely successful in helping individuals communicate, in this paper we take a step back and explore the design implications of a simpler tangible system for keeping in touch. This system consists of a pair of artificial electronic flowers that connect and transmit information to each other. Our contribution lies not in the actual implementation, but in the design implications that follow. In our modest evaluation, we found participants using our system in informal, relaxed, and sometimes novel ways.
In this paper we describe the practices used by alternate reality game (ARG) designers to engage fans with the issues and effects of global climate change under the scientific guidance of key non-profit organizations. Our multiple case study is based on three projects: Future Coast (2014), the Disaster Resilience Journal (2014), and Techno Medicine Wheel (2007 -- ongoing). Our analysis derives from interviews with each ARG designer and postmortem observations of each game's narrative structure. The findings provide HCI practitioners with a list of best practices related to the designers' use of narrative style and physical locations to support fan engagement. These practices emphasize the goals of non-profit organizations (NPOs) through science communication utilizing popular media forms.
Prolonged sitting time in adults has become a major societal issue with far-reaching health, economic, and social consequences. The objective of this study is to reduce sedentary behaviour in office workers by integrating physical activity with work. In this case study, we present Workwalk, a concept to encourage and facilitate walking meetings among office workers. The idea arose from merging a traditional health research approach with an iterative design process. With this method, it was possible to integrate behaviour change techniques effectively into an interaction design process.
Central venous catheterization is a relatively common bedside medical procedure that involves placing a catheter into a patient's internal jugular vein in order to administer medication or fluids. To learn this technique, medical students traditionally practice on training mannequins under the guidance of a clinical instructor. The objective of this project was to co-develop a standardized augmented reality solution for teaching medical students this procedure, enabling them to practice independently and at their own pace. Following an iterative design and prototyping process, we compiled a comprehensive set of usability heuristics specific to augmented reality healthcare applications, used to identify the unique usability issues associated with augmented reality software. This approach offers a better strategy for improving the usability of augmented reality systems and increases the potential to standardize medical education and render it more accessible. The benefits of applying augmented reality to simulated medical education come with heavy consequences in the event of poor learning outcomes; the usability of these systems is paramount to ensure that the development of clinical competence is facilitated, not hindered.
Engineering productivity has been a popular topic of discussion from both employee and employer perspectives. Through rigorous experimentation and a multi-methods approach, alongside subjective responses, we saw significant improvements in our engineering system. This paper reflects on over a year of learnings about methods for increasing response rates and designing surveys to monitor and improve both efficiency and satisfaction for internal developers. Measuring these concepts became important as we were making significant changes to our engineering system. These methods can be applied in other contexts to increase response rates and monitor satisfaction over time.
A key demand that a typical Brain-Computer Interface (BCI) places on operators is mental concentration: a focused mind is more effective at controlling a BCI. One way to improve mental concentration is meditation. Past studies have demonstrated the positive effect of long-term (month- to year-long) meditation on BCI performance. In this study, we examined the impact of short-term (10-15 minute) meditation. We explored two guided mindfulness meditation techniques: 1) open monitoring and 2) mindful breathing. A brain-controlled toy helicopter served as a testbed. Five participants volunteered for the experiment, in which they were exposed to an emotional stressor (violent video gameplay) followed by operating the helicopter through mental concentration, with a brief meditation intervention introduced between the gameplay and the BCI task. The results reveal that, in comparison to the control trial, participants were able to lift the helicopter for longer durations after meditation. Furthermore, this performance improvement was significantly greater than what participants achieved by sitting idle.
We present a case study of Mooqita, a platform that supports learners in online courses by enabling them to earn money, gain real job-task experience, and build a meaningful portfolio. This includes placing optional additional assignments in online courses. Learners solve these individual assignments, provide peer reviews for other learners, and give feedback on each review they receive. Based on these data points, teams are selected to work on a final paid assignment. Companies offer these assignments and in return receive interview recommendations from the pool of learners, together with solutions to their challenges. We report the results of a pilot deployment in an online programming course offered by UC BerkeleyX. Six learners out of 158 participants were selected for the paid group assignment, which paid $600 per person. Four of these six were invited to interviews at the participating companies, Crowdbotics (2) and Telefonica Innovation Alpha (2).
This case study describes the creation of a head-mounted-display virtual reality exergame program for promoting physical exercise among people with mild cognitive impairment (MCI), that is, people with early-stage dementia. We engaged in an iterative participatory design process with kinesiologists, recreational therapists, and people with MCI before pilot-testing a prototype with three persons with MCI. The test participants engaged with the exergame, were able to do the exercises, and gave very positive feedback. Engaging with professionals and people with dementia throughout the design process was very beneficial for creating a usable and engaging design and for identifying areas for further improvement. In conclusion, the approach illustrated in this case study resulted in a new way for older adults with MCI to engage in physical activities that is fun and tailored to their abilities. The next phase of our research is to evaluate the exergame against comparable human-guided movements.
This case study outlines the development of a service prototype for enhancing the proactive rescue work of the Finnish rescue department (Kupela). We present our development challenge, co-development methodology, prototype development process, and user research activities conducted in the wild. The aim of this research is to study how novel technologies can be used to enhance contextual understanding and situational awareness in predictive rescue work. By utilizing rich data sources, a coherent view of the rescue situation can be established, resulting in improved safety and a better experience for all stakeholders of the project, that is, tactical and operative rescue workers and customers of the rescue service.
For many, search engines like Google and Bing offer excellent facilities for satisfying information needs. However, a class of needs not currently addressed by generic search engines is investigative search, which has massive potential for using adaptive technology for social good. In this case study, we describe the challenges of investigative search in the online sex trafficking domain, along with the insights gained from user feedback in using a real-world investigative search system developed in our group.
Insights from cultural dimensions can be translated into UI guidelines to make technology products locally relevant. Such design suggestions originate in academia, but little insight exists into how this knowledge is applied and tested in global industry products, indicating a gap between research and practice. We contribute two case studies, developed from work experience and from interviews with Product Owners for e-commerce localization at Booking.com and with Managing Editors involved in the development of Cortana, Microsoft's intelligent assistant, that provide insight into the efforts to develop or optimize a locally relevant product. These case studies contribute to the academic discussion of the use and usefulness of cultural dimensions by examining the value of existing academic research when applied in specific circumstances. We found that cultural dimensions are used for idea generation, to inform design interventions at a global scale, and to justify personal experiences and intuitions. We also found that designers in industry rely more on personal experience and knowledge than on cultural dimensions. We discuss the complexities of optimizing for local markets and how insights from these case studies can help close the gap between industry and research.
Simply by paying with credit cards, consumers spend more than they would with cash, due to a difference in payment transparency. We introduced a mobile application to test the efficacy of personalized feedback interventions in helping people save money by lowering credit card expenses and, ultimately, in guiding them towards more responsible use of digital forms of payment. For our large-scale field study (N > 1,000 individuals), we cooperated with a credit card issuer so that we could test the effectiveness of our mobile-mediated interventions on real-world credit card transactions over a period of three months. This paper summarizes the main challenges we encountered and the measures we took that enabled us to leverage highly sensitive data for research; these serve as guidelines for future industry-facing field studies.
In this case study, three designers supported by multiple stakeholders created a pair of fully personalized printed high-heel shoes for a single user over a period of two months. The shoes are made with soft and flexible materials for dynamic fit and use. They are not only uniquely formed to the user's feet; the geometry of the material is also designed to support and flex with the movement of each foot. The shoes employ a 4D printing approach in that they are made to fit the user as they move and change. Designing a shoe to this degree represents a form of Ultra Personalization, and this case study of an ultra-personalized approach addresses the negotiation of key design considerations: aesthetics, comfort, robustness, balance, and temperature. The findings inform digital fabrication design, software, and tools for designers.
An unprecedented 65 million people around the world are forcibly displaced. As stakeholders call for technological innovation to address the realities of the global migration crisis, HCI researchers in and outside academia engage with the issues faced by humanitarian responders, refugees, and communities. Our exploratory study at the UNHCR Za'atari Syrian refugee camp in Jordan highlights the creative ways young people co-opt technology to perform information work. We collected data through a survey, diaries, and observations. The findings offer important insights. First, hacking at Za'atari is highly gendered, with only boys observed engaging in the activity. Second, the range of young hackers' activities indicates that: a) youth help their family and community through technology; b) benefits include getting paid and feeling empowered by their role as technology experts; and c) a connected learning environment emerges. Finally, the physical location where youth co-opt technology is important in building capacity.