Professor Danielle Wood leads the new Space Enabled Research Group at the MIT Media Lab, which advances justice in Earth's complex systems using designs enabled by space. Space Enabled sees opportunities to advance justice by increasing access to space technology for development leaders around the world and by applying space technology in support of the Sustainable Development Goals adopted by the United Nations. Six space technologies have been contributing to the Sustainable Development Goals for decades, but barriers remain that limit their impact: satellite earth observation, satellite communication, satellite positioning, microgravity research, technology transfer, and growing research infrastructure. The Space Enabled Research Group implements projects with development leaders at the multilateral, national, and local scales to apply space technology in support of their initiatives. During these projects, Space Enabled applies an integrated design process that includes techniques from engineering design, art, social science, complex systems modeling, satellite engineering, and data science. In this talk, Prof. Wood will discuss the role of space in spurring innovation and development and give examples of projects pursued by the Space Enabled Research Group.
Many researchers and evangelists argue that V.R. is fundamentally more "moving" than other media because of users' visual immersion in navigable worlds and their empathic identification with another visual perspective (see Rubin, Bailenson). This essay will analyze women of color's labor as virtual reality's documentary subjects, whose digital presence and hospitality within war-torn, immiserated, and inhospitable scenes such as a Lebanese refugee camp, a favela, and a cucumber farm enables a fantasy of virtuous empathy on the part of the viewer. Virtual reality's painstakingly created virtuous identity as the "empathy machine" satisfies desires for prosocial feelings of compassion, empathy, and identification that replace encounters with politics, unwelcome bodies, and protest. Global South women of color, non-white refugee women, and trans women are all virtual objects of identification in virtual reality and video games, platforms that are inextricably connected yet carry very different moral and ethical connotations.
"Late twentieth-century machines have made thoroughly ambiguous the difference between natural and artificial, mind and body, self-developing and externally designed, and many other distinctions that used to apply to organisms and machines. Our machines are disturbingly lively, and we ourselves are frighteningly inert."" - Donna Haraway. Simians, Cyborgs, and Women: The Reinvention of Nature. The boundaries that Haraway lamented in 1990 were further eroded by the World Wide Web, wireless networks, smartphones, and today by social media, fake news, claims about Artificial Intelligence and an impending "Singularity". Are all boundaries truly illusionary? Or can we question illusions of illusion by actively asserting boundaries between ourselves and our machines? Artistic collaborators Shlain and Goldberg will describe how their art projects and experiences with technology are leading them to rediscover old barricades.
Stories on the home materialize in many different ways. Simple design scenarios of more efficient smart homes exist alongside more articulated design fictions narrating complex domestic futures. IoT toolkits can be used in co-design to narrate design stories together with people. However, little attention has been paid to the stories captured in the co-creation process. This paper presents a framework for describing, comparing, and assessing design stories. We illustrate the framework through a comparison of the design stories captured from three divergent IoT toolkits in co-design workshops. Three dimensions characterize the design stories emerging from our inquiry: complexity (resolution and scope), likeliness (conceivability and feasibility), and implications (acceptability and consequentiality). This framework contributes towards understanding which properties of IoT toolkits support the emergence of what kind of design story. Our findings help designers to frame expectations when using IoT toolkits and to conceive IoT toolkits that support underexplored qualities of design stories.
While past work has admirably supported crowd workers in improving their work performance, we argue that there is also value in designing for enjoyment untied from work outcomes, what we call "tangential play." To this end, we present Turker Tales, a Google Chrome extension that uses tangential play to encourage crowd workers to write, share, and view short tales as a side activity to their main job on Amazon Mechanical Turk (MTurk). Turker Tales introduces a layer of playful narrativization atop typical crowd work tasks in order to alter workers' experiences of those tasks without aiming to improve work efficiency or quality. Using speed-dating (N=12) and a pilot test (N=150) to inform our design, we deployed Turker Tales over one week with 171 participants, receiving 1,096 tales and 1,527 ratings of those tales. We found that our system of tangential play brought to light underlying conflicts (such as unfair working conditions) and provided a space for participants to reveal aspects of themselves and their shared experiences. Through Turker Tales, we critically reflect on the roles of researchers, designers, and requesters in crowd work and the ethics of incorporating play into crowd work, and we consider the implications of the paradigm we introduce both as a method of research through design and as a direction for design to support crowd workers.
This paper aims to elevate stories of design by people with disabilities. In particular, we draw from counter-storytelling practices to build a corpus of stories that prioritize disabled people as contributors to professional design practice. Across a series of workshops with disabled activists, designers, and developers, we developed the concept of biographical prototypes: under-recognized first-person accounts of design materialized through prototyping practices. We describe how the creation of such prototypes helps position disabled people as central contributors to the design profession. The artifacts engendered an expanded sense of coalition among workshop participants while prompting reflection on tensions between recognition and obligation. We end by reflecting on how the prototypes, and the practices that produced them, complement a growing number of design activities around disability that reveal complexities around structural forms of discrimination and the generative role that personal accounts may play in their revision.
The experience of agency has been central to interactive digital storytelling research. Current approaches to agency place the actions of a reader in direct conflict with the authorial intentions of an interactive storyteller by privileging the authorial pleasures that come from interacting with the story's plot. In this paper we present the design and results of an empirical study of Shiva's Rangoli, a tangible interactive narrative system that supports meaningful interactions with the narrative that are decoupled from character actions and plot outcomes. Shiva's Rangoli allows readers to change the emotional, presentational, and aesthetic context of the story by giving them control, through a tangible interface, over the ambience of the installation space where the story is experienced. From our study, we identify three interaction styles or strategies undertaken by readers, reframing them as Story Supporters, Meaning Makers, and Story Controllers, to explore the pleasures of bounded agency within interactive storytelling.
Incorporating health personas for older adults into design processes can help designers accurately represent older adults by evoking empathy, facilitating consideration of health issues and needs, and reducing reliance on stereotypes. Toward this goal, we create a two-level quantitative methodology for constructing persona skeletons from imbalanced datasets. We demonstrate our methodology by constructing a set of 4 care-management personas for U.S. older adults by filtering and analyzing demographic, behavioral risk factor, and chronic health condition data from 170,704 randomly sampled older adults in a national survey with imbalanced coverage (i.e., between unconditional and conditional questions). We obtain 4 cluster centers for unconditional questions through K-means clustering and by iteratively dropping irrelevant features. Within each cluster, we analyze selected respondents for conditional questions. We synthesize the results into persona narratives and provide a weighting scheme to quantitatively measure each persona's significance. We contribute a robust persona construction methodology, here applied towards representing older adults.
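As an illustration of this kind of clustering step, the sketch below shows one way the unconditional-question stage could look in Python: respondents are clustered with K-means and features whose cluster centers barely differ are iteratively dropped before forming persona skeletons. This is a minimal sketch under assumed data; the column names, relevance criterion, and thresholds are hypothetical and not the authors' actual pipeline.

```python
# Hedged sketch (not the authors' pipeline): cluster unconditional survey
# responses with K-means, iteratively dropping low-relevance features.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def persona_skeletons(responses: pd.DataFrame, k: int = 4, min_spread: float = 0.25):
    """Cluster respondents on unconditional questions, pruning features whose
    cluster centers barely differ (a stand-in relevance criterion)."""
    features = list(responses.columns)
    while True:
        X = StandardScaler().fit_transform(responses[features])
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        # "Relevance" here = spread of a feature's k cluster-center values.
        spread = km.cluster_centers_.max(axis=0) - km.cluster_centers_.min(axis=0)
        keep = [f for f, s in zip(features, spread) if s >= min_spread]
        if len(keep) == len(features) or len(keep) < 2:
            return km, features
        features = keep  # drop irrelevant features and re-cluster

# Synthetic data standing in for survey responses (hypothetical columns).
df = pd.DataFrame(np.random.rand(500, 6),
                  columns=["age", "bmi", "exercise", "smoking", "noise1", "noise2"])
model, kept = persona_skeletons(df)
print(kept, model.cluster_centers_.shape)
```

In the paper's actual methodology, conditional questions are then analyzed within each cluster and combined through the reported weighting scheme; the sketch only covers the first level.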
There has been increased interest in researching the beneficial effects of everyday sounds, other than music, on people with dementia. However, to turn this potential into concrete design applications, a qualitative understanding of how people engage with sound is needed. This paper presents the outcomes of three workshops exploring the personal experiences that soundscapes evoke in people in the early to mid-stages of dementia. Using the dementia soundboard, we provide key insights into how sounds from everyday life triggered personal associations, memories of the past, emotional responses, and the sharing of experiences. Furthermore, we identified several design considerations and practical insights for sound-based technologies in the context of dementia care. This paper sets out a path for further design-research explorations and the development of concrete sound-based interventions for enriching the everyday lives of people with dementia.
Research in positive psychology indicates that sustained well-being is determined more by our actions than by our possessions. Products' contribution to well-being may thus be grounded in their potential to support well-being-enhancing activities rather than in their material value. In a laddering study, we investigated how products shape a range of well-being determinants, including activities, and well-being outcomes. Following a hierarchical structure, seven product experience qualities, six motivations, and seven activities were empirically found to be linked to long-term well-being. We describe these ingredients for sustained well-being in further detail and provide actionable guidance on how to address them by means of design. As the majority of product-supported long-term well-being outcomes were mediated by activities, we propose activities as the most promising starting point in design for sustained well-being.
Poor indoor air quality (IAQ) can affect health and cognitive performance before users become aware of the declining air quality. Yet office occupants rarely have access to IAQ information upon which to base their ventilation decisions. This paper details the design and deployment of a situated IAQ display as a probe to explore ventilation and building operation practices when IAQ information is made available. Based on deployments in 11 naturally ventilated offices, we present an analysis of how reflection and sense making around IAQ can inform interactions with buildings. We suggest that displays that are locally situated, non-disruptive, and visualise the effects of poor IAQ with human analogies may hold potential for engaging office occupants with IAQ. We highlight how ambient displays represent a stepping-stone towards more informed interactions which can improve air quality and cognitive performance, and how IAQ feedback may usefully contribute to alternative HCI research agendas such as Human-Building Interaction.
The design of a new technology entails the materialisation of values emerging from the specific community, culture and context in which that technology is created. Within the domain of musical interaction, HCI research often examines new digital tools and technologies which can carry unstated cultural assumptions. This paper takes a step back to present a value discovery exercise exploring the breadth of perspectives different communities might have in relation to the values inscribed in fictional technologies for musical interaction. We conducted a hands-on activity in which musicians active in different contexts were invited to envision not-yet-existent musical instruments. The activity revealed several sources of influence on participants' artefacts, including cultural background, instrumental training, and prior experience with music technology. Our discussion highlights the importance of cultural awareness and value rationality for the design of interactive systems within and beyond the musical domain.
The experience of sound may be seen as fleeting or ephemeral, as it naturally disperses through space in waveforms unless recorded by media. We designed muRedder to reinstate the ephemerality of sound by shredding a song ticket that embeds a sound source while playing the song simultaneously. In this study, we explored ordinary music listening activities by turning intangible music content into tangible artefacts, making the music unable to be replayed, and representing the sound-fading process by shredding the ticket. We conducted a field study with 10 participants over seven days. The results showed that muRedder enabled users to focus solely on the music content and to actively find times to enjoy the music. We also found that the limitedness of the medium prompted more deliberate decisions in selecting music. By showing the process of consuming invisible auditory content in a way that is tangibly perceivable, our findings imply new value for slow consumption of digital content and musical participation in public spaces.
This paper explores the process by which designers come to terms with an unfamiliar and ambiguous sensor material. Drawing on craft practice and material-driven interaction design, we developed a simple yet flexible sensor technology based on the movement of conductive elements within a magnetic field. Variations in materials and structure give rise to objects which produce a complex time-varying signal in response to physical interaction. Sonifying the signal yields nuanced and intuitive action-sound correspondences which nonetheless defy easy categorisation in terms of conventional types of sensors. We reflect on a craft-based exploration of the material by one of the authors, then report on two workshops with groups of designers of varying background. Through examining the objects produced and the experience of the participants, we explore the tension between tacit and explicit understanding of unfamiliar materials and the ways that material thinking can create new design opportunities.
In this pictorial, we present the ListeningCups: a set of 3D printed porcelain cups embedded with datasets of everyday ambient sounds. During a one-week pilot project, a ceramic artist and an interaction design researcher collaborated to explore meaning making around everyday data (sound in our case). We developed a workflow to capture data, prepare datasets, transcribe data from decibels to G-code, and create a set of 3D printed porcelain cups which represent this data in a textural and tactile form. We discuss how our work included aesthetic investigative practices as well as data accidents. We conclude by contributing two concepts, data tactility and data stories, that can serve as starting points for designers, artists, or researchers interested in the intersection of materiality, data, fabrication, and ceramics.
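To make the decibels-to-G-code step more concrete, the sketch below shows one plausible way a loudness series could modulate the toolpath of a single printed layer, pushing the wall outward on louder samples so the data becomes tactile texture. It is a hedged illustration only: the radius, extrusion, and scaling parameters are invented for the example and are not the workflow the authors used.

```python
# Hedged sketch: one circular extrusion layer whose radius wobbles with loudness,
# so louder decibel samples become raised texture on the cup wall.
import math

def layer_gcode(db_samples, base_radius=30.0, z=5.0, scale=0.05,
                feed=1200, e_per_mm=0.033):
    """Emit G-code moves for one layer; all parameters are illustrative."""
    lines = [f"G1 Z{z:.2f} F{feed}"]
    e = 0.0
    n = len(db_samples)
    prev = None
    quietest = min(db_samples)
    for i, db in enumerate(db_samples):
        angle = 2 * math.pi * i / n
        r = base_radius + scale * (db - quietest)      # louder -> bump outward
        x, y = r * math.cos(angle), r * math.sin(angle)
        if prev is not None:
            e += e_per_mm * math.dist(prev, (x, y))    # accumulate extrusion
        lines.append(f"G1 X{x:.3f} Y{y:.3f} E{e:.4f} F{feed}")
        prev = (x, y)
    return "\n".join(lines)

# Example: a short, made-up series of decibel readings.
print(layer_gcode([42, 48, 55, 61, 47, 44, 58, 63, 50, 45]))
```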
Leveraging knowledge generated in design activities for subsequent design and research purposes is an important yet challenging task due to the complex and messy nature of design work. In this paper, we discuss opportunities and challenges for documentation in design processes to support the transformation of design knowledge across different levels of abstraction. We argue for an activity-centred approach to underscore the construction of intermediate forms of knowledge, where immediate responses to design activities inform mediated representations of design work. The relation between immediate and mediated knowledge is explored through a case study of Co-notate, a research prototype enabling automatic synthesis of audio/video recordings and textual annotations simultaneously captured during design activities. We argue that real-time annotated design documentation limits the risk of recorded activities being misinterpreted, or simply disregarded, and we identify and discuss potentials for employing real-time technology to inform retrospective analysis.
We report on findings from seven design workshops that used ideation and sketching activities to prototype new situated visualizations - representations of data that are displayed in proximity to the physical referents (such as people, objects, and locations) to which the data is related. Designing situated visualizations requires a fine-grained understanding of the context in which the visualizations are placed, as well as an exploration of different options for placement and form factors, which existing methods for visualization design do not account for. Focusing on small displays as a target platform, we reflect on our experiences of using a diverse range of sketching activities, materials, and prompts. Based on these observations, we identify challenges and opportunities for sketching and ideating situated visualizations. We also outline the space of design activities for situated visualization and highlight promising methods for both designers and researchers.
Many User Experience (UX) practitioners face organizational barriers that limit their ability to influence product decisions. Unfortunately, there is little concrete knowledge about how to systematically overcome these barriers to optimize UX work and foster a stronger organizational UX culture. This paper introduces the concept of User Experience Capacity-Building (UXCB) to describe the process of building, strengthening, and sustaining effective UX practices throughout an organization. Through an integrated literature review of relevant HCI and capacity-building research, this paper defines UXCB and proposes a conceptual model that outlines the conditions, strategies, and outcomes that define a UXCB initiative. Five areas of future research are presented that aim to deepen our understanding of UXCB as both a practice and an area of scholarship.
The adoption of design fiction into design research has recently been expanding within the HCI community. Design fiction workshops have fruitfully facilitated users and researchers in discussing and creating future technologies by exposing differing viewpoints. Yet, most scholarship focuses on the ostensibly successful outputs of these workshops. It remains unclear exactly what sort of interaction dynamics are instigated by design fiction in collaborative design. How might design fiction affect what we consider in design, and how is this reflected in the ensuing design? To fill this gap, our study examines design fictions across five workshops where diverse participants created futuristic autobiographies (a method to elicit values) and built diegetic prototypes both individually and collaboratively. We detail their design processes and unpack three kinds of soft conflicts that arose between participants and allowed them to bring up and discuss differing values regarding technology in society. Reflecting on our workshops, we discuss their implications for how one might employ design fiction in collaborative design.
Spatial abilities are grounded in the way we interact with and understand the world through our physical body. However, existing spatial ability training materials are largely paper- or screen-based; they rarely engage or encourage the use of the body. Tangible and Virtual Reality (VR) technologies provide opportunities to re-imagine designing for spatial ability. We present a game-based system that combines embodiment in VR with tangible interfaces and is designed around a specific spatial ability known as penetrative thinking: the capacity to imagine the internal structure of objects based on external cues, which is important in areas like design, geosciences, medicine, and engineering. We describe the iterative design, implementation, and testing of our system, including a user study of the final design. The user evaluation results showed that participants had positive experiences when solving the penetrative thinking puzzles in the tangible VR system.
Virtual Reality Environments (VRE) create an immersive user experience through visual, aural, and haptic sensations. However, the latter is often limited to vibrotactile sensations that cannot actively provide kinesthetic motion actuation. Further, such sensations do not cover natural representations of physical forces, for example, when lifting a weight. We present PneumAct, a jacket that enables pneumatically actuated kinesthetic movements of arm joints in VRE. It integrates two types of actuators inflated through compressed air: a Contraction Actuator and an Extension Actuator. We evaluate our PneumAct jacket through two user studies with a total of 32 participants: First, we perform a technical evaluation measuring the contraction and extension angles of different inflation patterns and inflation durations. Second, we evaluate PneumAct in three VRE scenarios, comparing our system to traditional controller-based vibrotactile feedback and a baseline without haptic feedback.
In the emerging ecology of commercial social VR, avatars that serve to represent individuals within these multi-user virtual worlds are at the heart of the embodied social experience. Current industry approaches to avatars in social VR applications vary widely, and the (sometimes tacit) design knowledge acquired by those who created these platforms has much to offer research in HCI. In this paper, we describe current design practices, and reflect on the design approaches that characterize avatars and avatar systems in this emerging commercial sector. To investigate design approaches to avatar systems and their impact on communication and interaction with people within this medium, we interviewed industry experts associated with a range of platforms including Rec Room, AltspaceVR, High Fidelity, VRChat, Anyland, and Mozilla Hubs. In analyzing the ways that design choices shape embodied experience, we map design approaches to avatar systems in this evolving landscape and make preliminary claims about the impact of these varying design approaches.
Many studies have proposed different ways of supporting flying in embodied virtual reality (VR) interfaces, with limited success. Our research explores the use of a user's lower body to support flying locomotion control through a novel "flexible perching" (FlexPerch) stance that allows users to move their legs while sitting. We conducted an observational study exploring participants' preferred usage of the FlexPerch stance, and a mixed-method study comparing the same flying experience with existing sitting and standing stances. Our results show that FlexPerch markedly increased participants' feelings of flying. However, people may not like "flying" when they really can: the freedom, feeling of floating, and novelty that contribute to this sensation can also mean more effort and a sense of being unsafe or unfamiliar. We suggest that researchers studying VR flying interfaces evaluate the feeling of flying, and we raise design considerations for using stances like FlexPerch to elicit feelings of flying and stimulation.
Dancer-teacher communication in a ballet class can be challenging: ballet is one of the most complex forms of movement, and learning happens through multi-faceted interactions with studio tools (mirror, barre, and floor) and the teacher. We conducted an interview-based qualitative study with seven ballet teachers and six dancers, followed by an open-coded analysis, to explore the communication challenges that arise while teaching and learning in the ballet studio. We identified key communication issues, including adapting to multi-level dancer expertise, transmitting and realigning development goals, providing personalized corrections and feedback, maintaining the state of flow, and communicating how to properly use tools in the environment. We discuss design implications for crafting technological interventions aimed at mitigating these communication challenges.
This paper presents a case study of a long-term collaboration between a physical performance company and interactive digital artists. The collaboration has resulted in the creation of five major performance works which have toured internationally over several years. We argue that the interactive systems can be considered a 'material' which changes over time, shaping performer actions and being shaped by them in return. Based on detailed interviews with key stakeholders and our own personal reflections, we have identified several 'trajectories' that have evolved over the duration of each individual production and the entire body of work. These trajectories address a number of perspectives including the way performers interact with the system, the relationship between the dramaturgy and the interaction palette and the way the stakeholders conceive of the interactive system. The evolution of the technology itself has also been examined in terms of aesthetic capability, performance robustness, operational cost and complexity across the entire duration of the collaboration.
This paper reports on the first year of a three-year-long co-creation project with older adults. We focus our analysis on one particular workshop in which participants stopped designing and began to think about promoting the app we were co-creating. The workshop proved uniquely important for examining assumptions we had made about how and why the co-creation process would be successful. This paper concedes flaws in these assumptions and in the execution of the methodology as a way of illuminating dynamics that act on research projects in ways that are antithetical to effective co-creation. Reporting on the unexpected results of our participant engagements, we reveal new insights into the challenges in executing co-creation methodology.
There is growing interest in technologies that allow older adults to socialise across geographic boundaries. An emerging technology in this space is social virtual reality (VR). In this paper we report on a series of participatory design workshops, involving extended in-depth collaboration with 22 older adults (aged 70-81), that aimed to understand their views on the types of social VR experiences they saw as being of value to older adults. This process culminated in a reminiscence-based social VR concept. Our study identifies: participants' ideas about the types of social VR experiences they found appealing; the potential for social VR as a powerful reminiscence tool; and how social VR might be used as a tool to challenge ageing stereotypes and promote healthy ageing. Reflecting on the design process, we discuss how the diverse participant groups and the complexities involved in mediating between designers, the technical team, and participants could inform future design work.
Drawing upon multiple disciplines, avant-garde fashion-tech teams push the boundaries between fashion and technology. Many are well trained in envisioning the aesthetic qualities of garments, but few have formal training in designing and fabricating the technologies themselves. We introduce Mannequette, a prototyping tool for fashion-tech garments that enables teams to experiment with interactive technologies at early stages of their design processes. Mannequette provides an abstraction of light-based outputs and sensor-based inputs for garments through a DJ mixer-like interface that allows for dynamic changes and recording/playback of visual effects. The base of Mannequette can also be incorporated into the final garment, where it is then connected to the final components. We conducted an 8-week deployment study with eight design teams who created new garments for a runway show. Our results revealed that Mannequette allowed teams to repeatedly consider new design and technical options early in their creative processes, and to communicate more effectively across disciplinary backgrounds.
The increasing complexity of cyber-physical systems poses special challenges for users to be able to appropriate and apply such technologies within their practice. Classic tools to support appropriation have usually taken the form of written manuals or video tutorials. However, recent research regarding appropriation infrastructures and sociable technologies has suggested that appropriation support functionality can be integrated directly into software and cyber-physical systems. The problem confronting this kind of support is the adequate visualization of the information and the provision of user interfaces that could offer the necessary basis for appropriation. Based on the example of 3D printing, we examine how projection mapping, as an innovative form of visualization, can be used as a user interface for hardware-related appropriation support. Through reflections upon the design and evaluation of a projection-based 3D-printer system, we provide insights that extend the notion of appropriation support to encompass projection mapping and that can contribute to the future development of projection-based human-machine interfaces.
The Tilting Bowl is a ceramic bowl that unpredictably but gently tilts multiple times daily. This pictorial reports on the crafting of the electronics of the Tilting Bowl within the concept of a research product [10]. From this perspective, the seemingly simple task of making a bowl tilt holds unique challenges and demands, especially for a research product that is deployed in everyday settings for lengthy periods of time. We highlight electronic design challenges that came up in three processes of making the Tilting Bowl: the tilting mechanism, hardware integration of the electronics, and power management. Lastly, we offer three suggestions for designing electronics for research products.
This paper discusses how a networked object in the form of a small robot, designed to mediate experiences of care, social connectedness, and intimacy, was used by adolescents with Myalgic Encephalomyelitis, a condition that reduces their normal functioning, including the ability to socialize. A study with nine adolescents, each using the robot for about a year on average, revealed that it was largely effective at mediating their everyday experiences of relatedness, triggering productive new habits and social practices. We interpret these findings to propose a set of strategies for designing technologies that support relatedness while requiring minimal interactivity and engagement. Balance, extension-of-self, coolness, and acts-of-care, in addition to the commonly used qualities of physicalness, expressivity, and awareness, enable the robot to extend the adolescents' ability to relate to others, both people and animals.
Autism is a complex, life-long condition that manifests itself in unique ways in each person. Due to the complexity of the condition, along with a lack of efficient and immediate social support, parents of autistic children often seek out and rely upon information generated by the community (parents, caregivers, autistics and experts) on online platforms. We look into what parents of autistic individuals discuss on an online platform in Turkey, how they practice autism online, and why those practices are important or relevant. Our findings show how parents cope with understanding and defining autism, how they seek to empower each other, and how they manage the everyday collectively under a dominant medical discourse around autism in the Turkish context. Based on our findings, we extend existing knowledge on collective and alternative ways of re-defining autism as lived experience and introduce recommendations on how those strategies can be integrated into design.
Co-reading (when parents read aloud with their children) is an important literacy development activity for children. HCI has begun to explore how technology might support children in co-reading, but little empirical work examines how parents currently co-read, and no work examines how people with visual impairments (PWVI) co-read. PWVIs' perspectives offer unique insights into co-reading, as PWVI often read differently from their children, and (Braille) literacy holds particular cultural significance for PWVI. We observed discussions of co-reading practices in a blind parenting forum on Facebook to establish a grounded understanding of how and why PWVI co-read. We found that PWVIs' co-reading practices were highly diverse and affected by a variety of socio-technical concerns, and that visual ability was less influential than other factors such as the ability to read Braille, the presence of social supports, and children's literacy. Our findings show that PWVI have valuable insights into co-reading, which could help technologies in this space better meet the needs of parents and children, with and without disabilities.
We present our participatively and iteratively designed 3D audio-tactile globe that enables blind and low-vision users to perceive geo-spatial information. Blind and low-vision users rely on learning aids such as 2D-tactile graphics, braille maps and 3D models to learn about geography. We employed participatory design as an approach to prototyping and evaluating four different iterations of a cross-sensory globe that uses 3D detachable continents to provide geo-spatial haptic information in combination with audio labels. Informed by our participatory design and evaluation, we discuss cross-sensory educational aids as an alternative to visually-oriented globes. Our findings reveal affordances of 3D-tactile models for conveying concrete features of the Earth (such as varying elevations of landforms) and audio labels for conveying abstract categories about the Earth (such as continent names). We highlight the advantages of longitudinal participatory design that includes the lived experiences and DIY innovations of blind and low-vision users and makers.
Biosensors, devices that sense the human body, are increasingly ubiquitous. However, it is unclear how people evaluate the risks associated with their use, in part because it is not well understood what people believe these sensors can reveal. In this study, participants ranked biosensors by how likely they are to reveal what a person is thinking and feeling. We report quantitative and qualitative results of two survey-based studies, one with Mechanical Turk workers (n=100) and one with participants in a longitudinal self-tracking study (n=100). Our findings imply that, in the absence of information about particular sensing technologies, people rely on existing beliefs about the body to explain what these sensors might reveal. Highlighting mismatches between perceived and actual technical capabilities, we contribute recommendations for designers and users.
Artificial intelligence (AI) technologies are complex socio-technical systems that, while holding much promise, have frequently caused societal harm. In response, corporations, non-profits, and academic researchers have mobilized to build responsible AI, yet how to do this is unclear. Toward this aim, we designed Judgment Call, a game for industry product teams to surface ethical concerns using value sensitive design and design fiction. Through two industry workshops, we found Judgment Call to be effective for considering technology from multiple perspectives and identifying ethical concerns. This work extends value sensitive design and design fiction to ethical AI and demonstrates the game's effective use in industry.
Society is affected by the consequences of data collection, and there are trends visible in law, public debate, and technology that could make a privacy-conscious future possible. We study how to avoid data collection from the perspective of design and the role design can play, to provide a starting point for new developments in this context. We do so by presenting a portfolio that exemplifies a range of possible design contributions. We show how to design smart products for retail and the smart home while avoiding data collection, how to convince clients through design, and how to use design to spread awareness. We present design notions and reflections, stemming from this portfolio, for the synthesis of new designs that further explore the potential of design practice to support the trend towards privacy.
Facial analysis applications are increasingly being applied to inform decision-making processes. However, as global reports of unfairness emerge, governments, academia, and industry have recognized the ethical limitations and societal implications of this technology. Alongside initiatives that aim to formulate ethical frameworks, we believe that the public should be invited to participate in the debate. In this paper, we discuss Biometric Mirror, a case study that explored opinions about the ethics of an emerging technology. The interactive application inferred demographic and psychometric information from people's facial photos and presented speculative scenarios with potential consequences based on their results. We analyzed the interactions with Biometric Mirror and media reports covering the study. Our findings demonstrate the nature of public opinion about the technology's possibilities, reliability, and privacy implications. Our study indicates an opportunity for case study-based digital ethics research, and we provide practical guidelines for designing future studies.
To investigate how a VR study context influences participants' User Experience responses to an interactive system, a UX evaluation of the same in-vehicle systems was conducted in the field and in virtual reality. The virtual environment featured a virtual road scene and an interactive in-car environment, paired with a physical set-up containing a table-mounted steering wheel and a touch-sensitive panel. The VR system enabled high ratings of presence and focus; however, participants voiced less affect in the virtual setting and had difficulty separating judgments of the VR experience from the UX of the in-vehicle systems. No significant differences in UX questionnaire data were identified between VR and the field, but there were correlations between rated presence in the VR system and UX ratings, especially for reported stimulation. Based on the lessons learned, the paper recommends a number of methodological and technological improvements, such as more dynamic movement behaviour, improved graphics resolution for the virtual vehicle, and making the test leader visually present in the virtual environment.
AV-pedestrian interaction will impact pedestrian safety, etiquette, and overall acceptance of AV technology. Evaluating AV-pedestrian interaction is challenging given limited availability of AVs and safety concerns. These challenges are compounded by "mixed traffic" conditions: studying AV-pedestrian interaction will be difficult in traffic consisting of vehicles varying in autonomy level. We propose immersive pedestrian simulators as design tools to study AV-pedestrian interaction, allowing rapid prototyping and evaluation of future AV-pedestrian interfaces. We present OnFoot: a VR-based simulator that immerses participants in mixed traffic conditions and allows examination of their behavior while controlling vehicles' autonomy-level, traffic and street characteristics, behavior of other virtual pedestrians, and integration of novel AV-pedestrian interfaces. We validated OnFoot against prior simulators and Wizard-of-Oz studies, and conducted a user study, manipulating vehicles' autonomy level, interfaces, and pedestrian group behavior. Our findings highlight the potential to use VR simulators as powerful tools for AV-pedestrian interaction design in mixed traffic.
With the proliferation of room-scale Virtual Reality (VR), more and more users install a VR system in their homes. When users are in VR, they are usually completely immersed in their application. However, sometimes passersby invade these tracking spaces and walk up to users who are currently immersed in VR to try and interact with them. As this either scares the user in VR or breaks the user's immersion, research has yet to find a way to seamlessly represent physical passersby in virtual worlds. In this paper, we propose and evaluate three different ways to represent physical passersby in a Virtual Environment using Augmented Virtuality. The representations encompass a Pointcloud, a 3D-Model, and an Image Overlay of the passerby. Our results show that while an Image Overlay and a 3D-Model are the fastest representations for spotting passersby, the 3D-Model and the Pointcloud representations were the most accurate.
Head-mounted displays (HMDs) are being used for VR and AR applications and increasingly permeate our everyday life. At the same time, a detailed understanding of interruptions in settings where people wearing an HMD (HMD users) and people not wearing an HMD (bystanders) share a space is missing. We investigate (a) whether bystanders are capable of identifying when HMD users switch tasks by observing their gestures, and can hence exploit opportune moments for interruptions, and (b) which strategies bystanders employ. In a lab study (N=64) we found that bystanders are able to successfully identify both task switches (83%) and tasks (77%) within only a few seconds of the task switch. Furthermore, we identified interruption strategies of bystanders. From our results we derive implications meant to support designers and practitioners in building HMD applications that are used in co-located collaborative settings.
Technology use in India is highly gendered across diverse socioeconomic backgrounds, and women have only recently come to widely adopt smartphones, mobile internet, and social media---even in urban India. We present an in-depth qualitative investigation of the appropriation of social computing technologies by women from urban, middle-income households in New Delhi and Bangalore, India. Our findings highlight the additional burden that these women must contend with, on account of gender, as they engage on social media. We discuss these findings to make three contributions. First, we extend conversations on gender in Human-Computer Interaction (HCI) by discussing how design in patriarchal contexts might be rooted in existing efforts towards change and appropriation. Second, we expand understandings of privacy in HCI as being situated in the relationship between the individual and the collective. Third, we discuss how looking at our participants' social media use across multiple platforms leads to greater insight into the link between social media engagement and privacy.
This paper presents a design-led qualitative study investigating the (mis)use of digital technologies as tools for stalking, threats, and harassment within the context of intimate partner abuse (IPA). Results from interviews and domestic abuse forum data are reported on and set the foundation for a series of co-design workshops. The workshops invite participants to creatively anticipate smart home attack vectors, based on their lived experiences of IPA. Three workshops with seven IPA survivors and eleven professional support workers are detailed in this paper. Findings are organised into three phases through which survivors' privacy and security needs can be understood: 1) initial purchasing and configuring of smart home devices; 2) daily usage; and 3) (re-)securing devices after abuse has been identified. The speculative attack vectors and design ideas generated by participants expose, for the first time, survivors' understanding of smart home security and privacy, as well as their needs, concerns, and requirements.
In this pictorial, we explore how emergent menstrual biosensing technologies compound existing concerns for the everyday ethics of extracting and analyzing intimate data. Specifically, we review the data practices of a set of existing menstrual tracking applications and use that analysis to inform the design of speculative near future technologies. We present these technologies here in the form of a product catalog for a fictional company called Vivewell. Through this work, we contribute both a set of speculative design proposals and a case study of a design project that begins with the analysis of existing data policies.
Medical devices are moving out of the clinic and into the home. The design of these devices shapes our experience of interacting with our bodies. We attend to ovulation tracking devices that aid conception. We present Ovum: a research product that will be deployed in a long-term, qualitative study with couples trying to conceive. The contributions of this pictorial are the framing of the design space around at-home ovulation tracking devices and the presentation of our approach to this design space, which works with oppositional experiential qualities by designing for fertility tracking as a shared, domestic, and do-it-yourself experience.
In North America, people phone the number 9-1-1 to obtain emergency services. In the near future, such services will incorporate new communication modalities such as video calling, where callers can show visuals of the emergency to call takers. This information can then be shared between dispatchers and first responders such as firefighters. We conducted an exploratory study with dispatchers and firefighters to understand how 9-1-1 video call information should be shared with firefighters while en route to an emergency and what benefits and challenges it would create. Our results show that video call information can help firefighters gain more accurate information about an emergency, provide location specifics, pre-plan strategies, and mentally prepare for the situation while traveling to it. Yet there are design tensions around what and how much information should be shared with firefighters by dispatchers, and, in turn, what video information is shown to firefighter crewmembers.
The impact of computing devices on the nature of work has been a long-standing topic of inquiry. Removing the boundaries of traditional corporate organizations, mobile IoT has enabled a technology-driven future, taking transformative technology off the desk and placing it in the field. The exponential increase in mobility and reduction in cost have expanded accessibility of computing technologies to whole new categories of work, including emergency response. As new kinds of workplaces adopt and adapt to computing, we want to better understand how these technologies impact the organization and change the types of work people do. In order to answer those questions, we present a qualitative investigation examining the implementation of a wearable device into two fire departments in the Southeastern United States. Our analysis demonstrates how the particulars of these kinds of workplaces and organizations will shape how we design new digital technologies for the next generation workforce.
This paper presents an approach for ensuring mutual awareness among airliner pilots using touch-based interactions. Indeed, touchscreens are making their way into cockpits, but touch-based gestures are less performative than gestures on physical controls, and they are limited in aircraft for efficiency and safety reasons. To support safer perception by the other pilot, we propose to supplement the perception of performed gestures with graphical representations. Our hypothesis is that representing the effect of gestures is more relevant than representing the gestures themselves. We introduce our design choices in building the representations for mutual awareness, based on an analysis of the activity and graphical semiology. We report results gathered from walkthroughs of the designs with airliner pilots. These results confirm that representing the effects of gestures is an efficient means for mutual awareness. Our work shows how pilots understand the effect of a gesture both as a result and as an impression.
This pictorial proposes an alternative mode of interaction between nurses and clinical alarm systems and shows how three concepts, developed to interact through the end user's periphery of attention, can be applied in a clinical setting to improve the workflow of nurses and wellbeing of patients.
Serving as a robotic companion is a widespread application of on-body robots; yet, as an emerging type of robot, few previous works have focused on the design of on-body companion robots from the users' perspective, leaving users' expectations towards this type of robot unclear. To assist designers in the design process of on-body companion robots, we surveyed users' expectations towards on-body companion robots (n=215) through a questionnaire consisting of questions on factors that may affect robot acceptance, including robot functionality, robot appearance, and robot social ability. Based on the survey results, we state design guidelines for the design of on-body companion robots, supporting designers with insights into users. To demonstrate how to design on-body companion robots based on our findings, we organized a workshop with experienced designers to develop a conceptual on-body companion robot; they proposed Bubo, an example prototype of an on-body companion robot.
Interactions with multiple conversational agents and social robots are becoming increasingly common. This raises new design challenges: Should agents and robots be modeled after humans, presenting their entity (i.e., social presence) as bound to a single body, or should they take advantage of non-human capabilities, such as moving their social presence from body to body across service touchpoints and contexts? We conducted a User Enactments study in which participants interacted with agents that had one social presence per body, that could re-embody (move their social presence from body to body), and that could co-embody (move their social presence into a body that already contains another). Reactions showed that participants felt comfortable with re-embodying agents, who created more seamless and efficient experiences. Yet situations that required expertise or concentration raised concerns about non-human behaviors. We report on our insights regarding collaboration and coordination with several agents in multi-step interactions.
We describe a physical interactive system for human-robot collaborative design (HRCD) consisting of a tangible user interface (TUI) and a robotic arm that simultaneously manipulates the TUI with the human designer. In an observational study of 12 participants exploring a complex design problem together with the robot, we find that human designers have to negotiate both the physical and the creative space with the machine. They also often ascribe social meaning to the robot's pragmatic behaviors. Based on these findings, we propose four considerations for future HRCD systems: managing the shared workspace, communicating preferences about design goals, respecting different design styles, and taking into account the social meaning of design acts.
The inevitable increase in real-world robot applications will, consequently, lead to more opportunities for robots to have observable failures. Although previous work has explored interaction during robot failure and discussed hypothetical danger, little is known about human reactions to actual robot behaviors involving property damage or bodily harm. An additional, largely unexplored complication is the possible influence of social characteristics in robot design. In this work, we sought to explore these issues through an in-person study with a real robot capable of inducing perceived property damage and personal harm. Participants observed a robot packing groceries and had opportunities to react to and assist the robot in multiple failure cases. Prior exposure to damage and threat failures decreased assistance rates from approximately 81% to 60%, with variations due to robot facial expressions and other factors. Qualitative data was then analyzed to identify interaction design needs and opportunities for failing robots.
As the number of devices integrated into our lives skyrockets, we find ourselves inundated with opportunities for interaction. IoT promises to distribute computation and communication so pervasively that it becomes like background music for daily life, but the instruments often intrude in ways that subtract from an assistive, harmonious environment. We conducted anticipatory ethnography to enable people to imagine harmonious ecosystems that integrate speculative objects and their personal devices. Through activities we explored how participants might interact with future ecosystems within their daily routines. In this paper, we present a framework for embodied state as a central dimension from which to connect human motion and cognitive circumstance within the context of space and time. We find the need for substantial specialization in device ecosystems that bridge ambient and interactive spaces, and we present the notion of cognitive onloading for symmetrically explaining information distribution.
We present Sensorstation, a research product to explore the effect of smart sensors and services on the communal life within a shared apartment. Sensorstation utilizes wireless sensors and a shared output device displaying a steady data stream of sensor based notifications. It was deployed on the kitchen table in a shared apartment for 19 days to enable communal residents to co-design and to co-speculate on smart sensors and services in the context of their shared apartment. We synthesize and interpret findings to illustrate how residents created positive connections between each other, while simultaneously exercising self-monitoring, control over others, and contemplating reward systems and penalties. Our work contributes to a nuanced understanding of smart technology for shared apartments. We argue that design has an obligation to consider smart technology that acknowledges boundaries and to provide negotiation spaces to configure agency.
We present Bespoke Booklets: a design research method utilizing booklets of situated, imaginary, and personalized conceptual sketches to co-speculatively envision alternative futures (in our case for domestic Internet of Things). The Bespoke Booklets create a space where designers and participants can co-imagine alternative futures while also engaging each other at the level of embodied experiences. After refining our method, we discovered it had many qualities previously championed by feminist HCI and STS theorists. To this end, we draw out, analyze, and critique our method using feminist concepts as a lens to emphasize four specific qualities: collaborative, post-functional, situated, and partial. We found that the booklets, as material artifacts, were a productive tool to generate a physical record of our co-speculation and a fruitful catalyst for research that reflects feminist theory, offering an example of how it can be used as a heuristic for design methodologies.
Astral is a prototyping tool for authoring mobile and smart object interactive behaviours. It mirrors selected display contents of desktop applications onto mobile devices (smartphones and smartwatches), and streams/remaps mobile sensor data to desktop input events (mouse or keyboard) to manipulate selected desktop contents. This allows designers to use familiar desktop applications (e.g. PowerPoint, AfterEffects) to prototype rich interactive behaviours. Astral combines and integrates display mirroring, sensor streaming and input remapping, where designers can exploit familiar desktop applications to prototype, explore and fine-tune dynamic interactive behaviours. With Astral, designers can visually author rules to test real-time behaviours while interactions take place, as well as after the interaction has occurred. We demonstrate Astral's applicability, workflow and expressiveness within the interaction design process through both new examples and replication of prior approaches that illustrate how various familiar desktop applications are leveraged and repurposed.
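As a rough illustration of the sensor-streaming-to-input-remapping idea described above, the sketch below maps a streamed sensor value (e.g. phone tilt) onto a desktop input range through a simple linear rule that a designer could tune live. It is only a hedged sketch of the general technique: the class, ranges, and values are hypothetical and are not taken from Astral's implementation.

```python
# Hedged sketch of input remapping in general (not Astral's code): a linear rule
# maps a streamed sensor value onto a desktop input range such as mouse pixels.
from dataclasses import dataclass

@dataclass
class RemapRule:
    src_min: float   # expected lower bound of the sensor value
    src_max: float   # expected upper bound of the sensor value
    dst_min: float   # lower bound of the target input range (e.g. pixels)
    dst_max: float   # upper bound of the target input range

    def __call__(self, value: float) -> float:
        t = (value - self.src_min) / (self.src_max - self.src_min)
        t = max(0.0, min(1.0, t))                      # clamp out-of-range readings
        return self.dst_min + t * (self.dst_max - self.dst_min)

# Tilt of -45..45 degrees drives the horizontal position across a 1920 px display.
tilt_to_mouse_x = RemapRule(-45, 45, 0, 1920)
for tilt in (-45, 0, 30):                              # samples from a sensor stream
    print(tilt, "->", round(tilt_to_mouse_x(tilt)))    # value to feed a mouse-move event
```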
Access to wheelchair skills training is important for the mobility and independence of wheelchair users, but training rates are low - particularly among young people. In this paper, we present Geometry Wheels, a movement-based experience prototype to explore the potential of interactive technology to support basic wheelchair skills training for manual wheelchair users, designed with the support of occupational therapists. Results of an evaluation with 15 participants (10 young wheelchair users and 5 parents) show that interactive systems can deliver engaging and challenging activities that encourage wheelchair navigation and activity. However, the project also revealed challenges in designing for individual differences in physical abilities, in conflicts between children's and parents' perceptions of ability, and in barriers to home use. We outline strategies for the design of rehabilitative technology to help young people with disabilities build physical abilities.
We describe the development and implementation of a 7-month-long project which used a series of creative workshops designed in collaboration with a cultural institution and conducted with children to draw influences from magical realist literature into the development of AR applications. The project culminated in the release of an AR app for Android and iOS platforms, Magical Reality. After describing the design and implementation of the research, we discuss its findings as they support the two facets of our contribution to DIS: First, we assess our attempts to apply inspiration derived from workshopping ideas from magical realist literature with children to the design of AR experiences, making recommendations for future design practice seeking to include comparable influences. Second, we consider the degree to which our workshops were successful in combining specialist knowledges from across the different departments of a cultural organization to answer sectoral challenges and describe both advantages and challenges for future collaborative work.
In this paper we report on research exploring the privacy, security and safety implications of children being able to program Internet of Things devices. We present our methodology for understanding the contexts in which children may wish to use programmable IoT, identifying risks that emerge in such contexts, and creating a set of questions that might guide design of such technologies so that they are safe for child users. We evaluate the success of the methodology, discuss the limitations of the approach, and describe future work.
Inspired by HCI research advocating for the inclusion of children in the design process, this pictorial provides a qualitative case study on children's perceptions of urban landscapes. Our goal is to create digital maps in the context of locative systems and wayfinding for children. For this purpose, we engaged 70 students from the city of Funchal (Portugal) in the drawing of a cognitive map of their journey from home to school in the Fall of 2017. These children (9-12 years old) also replied to a brief survey, and 31 of the 70 participated in a face-to-face interview in the Spring of 2018. This pictorial offers an analysis of the drawings as well as highlights of the children's own accounts of their maps. Our work generates a set of 10 themes related to landmarks and design ideas for the creation of digital maps for children.
FamilySong aims to connect internationally distributed family groups via synchronized music-listening. It fosters feelings of togetherness and mutual belonging, that is, connection and culture. We conceptualize it as a domestic Media Space with no live audio or video. It emphasizes intimacy without intrusion, and lives within an existing ecology of interactive technologies. The design journey includes both autobiographical and research-through-design components with similarly-structured family groups (very young children with parents located in the United States and grandparents in Ecuador). Parents, children and grandparents all participated both in the moment and in subsequent interaction grounded in the FamilySong experience. Grandparents took the lead in expressing the importance of the values and goals they saw embedded in the system. This work presents a design that illuminates what it means to connect like a family in which values, needs and priorities are interdependent, and joy and delight are important.
With the massive proliferation of digital photos, new approaches are needed to enable people to engage with their vast photo archives over time. We describe the Research through Design process of Chronoscope, a domestic technology that leverages temporal metadata embedded in digital photos as a resource to encourage more temporally diverse, rich, and open-ended experiences when re-visiting one's personal digital photo archive. We unpack and reflect on design choices that made use of digital photo metadata to support new ways of interacting with personal photo archives through and across time. We conclude with opportunities for future HCI research and practice.
Sound zone technology is being developed to provide users with the ability to modify their personal soundscape. In this paper, we take first steps toward studying how and when users could use sound zone technology within the domestic context. We present a design ethnographic study of sounds in homes and the potential for utilising sound zone technology to modify soundscapes. Based on two rounds of qualitative interviews with seven participating households of diverse composition, dwelling type, and area type, we develop a design-oriented framework. The framework posits particular situations in which sound zone technology can support domestic activities. These are described and validated through the qualitative data collected in the households. The framework consists of two dimensions leading to four generalised situations: private versus social situations, and separate versus connected situations. A number of implications for designing interaction with sound zone systems in homes are derived from the framework.
The home is a rich context for design research to study things and the Everyday. However, home is also a place of utmost privacy for most people. To better understand this context through an observational artifact without impacting privacy, we designed the Peekaboo cam, which enables inhabitants to control their data release actively or passively. The Peekaboo cam is an observational research camera with a coverable lens. We validate the design in a field study in two homes over 14 days. The resulting photo streams provide qualitative insights into Everyday things in transition. We suggest four design guidelines for observational artifacts for home ecologies.
In the past decades the field of bioimaging has experienced an explosion of technologies that have revolutionized the study of the microscopic world. Through iterative prototyping and evaluation with museum visitors, we shed light on how these technologies, specifically image recognition techniques, can be incorporated into an interactive museum exhibit to help the visiting public gain insights and make sense of dynamic, live specimens as they use a research-grade microscope. Our work indicates that the technology needs to be carefully positioned as an aid to observation rather than an infallible source of expertise in order to support and sustain visitors' explorations. This paper contributes to our understanding of design considerations in integrating emerging imaging technologies and techniques into museum exhibits to enrich visitors' interactive learning experiences.
Computational thinking learning tools such as Scratch support forms of expressive making that can aid in reflection and understanding complex scientific concepts and systems. However, little research has explored how such computational tools might support forms of emotional literacy such as developing an understanding of emotions. Through an exploratory design study with 11 participants, Scratch was used to create models representing participants' emotion knowledge. The overarching research question focused on how computational tools might aid in supporting reflection on emotion knowledge. Artifacts generated through sketching and Scratch, as well as transcribed design discussions, served as data for analysis. Drawing on theories of computational thinking and emotional literacy, we present an analysis that highlights the potential for such tools to support certain reflective practices around emotion knowledge.
Through the integration of technology-enhanced learning (TEL) in classrooms, there is an increase in Virtual Learning Environment (VLE)-supported classes in secondary schools, which brings unintentional complexities in terms of monitoring for teachers [25]. To support secondary school teachers during VLE-supported lessons, a peripheral data visualisation system was designed and implemented in a three-week field study. Both qualitative and quantitative data were gathered and analysed through methodological triangulation in order to gain an in-depth understanding of how teachers used the system. The key findings from our study were that the peripheral data visualisation tool, as a distributed, highly visible system, was well integrated into the teachers' practice. The peripheral visualisation served as a trigger for teacher interventions, allowing the teacher to address a student's level of concentration and provide support when the student needed it. Furthermore, by offloading the secondary tasks of checking the students' level of concentration and progress to the visualisation, most teachers experienced more peace of mind and space to manage their primary teaching practice. Lastly, approximately 95% of the 89 students experienced the data visualisation as neutral or motivating, while 5.7% experienced it as a violation of privacy.
Interaction design is playing an increasingly prominent role in computing research, while professional user experience roles expand. These forces drive the demand for more design instruction in HCI classrooms. In this paper, we distill the popular approaches to teaching design to undergraduate and graduate students of HCI. Through a review of existing research on design pedagogy, an international survey of 61 HCI educators, and an analysis of popular textbooks, we explore the prominent disciplinary perspectives that shape design education in the HCI classroom. We draw on our analyses to discuss the differences we see in forms of design taught, approaches to adapting design instruction in computing-based courses, and the tensions faced by instructors of these classes. We conclude by arguing for the importance of pedagogical research on design instruction as a vital and foundational area of inquiry in Interaction Design and HCI.
Deformable interfaces are emerging in HCI and prototypes show potential for non-rigid interactions. Previous reviews looked at deformation as a material property of shape-changing interfaces and concentrated on output. As such, deformable input was under-discussed. We distinguish deformable from shape-changing interfaces to concentrate on input. We survey 131 papers on deformable interfaces and review their key design elements (e.g., shape, material) based on how they support input. Our survey shows that deformable input was often used to augment or replace rigid input, particularly on elastic and flexible displays. However, when shapes and materials guide interactions, deformable input was used to explore new HCI paradigms, where gestures are potentially endless, input becomes analogous to sculpting or a metaphor for non-verbal communication, and expressive controls are enhanced. Our review provides designers and practitioners with a baseline for designing deformable interfaces and input methodically. We conclude by highlighting under-explored areas and identifying research goals to tackle in future work with deformable interfaces.
Hybrid practices are emerging that integrate creative materials like paint, clay, and cloth with intangible immaterials like computation, electricity, and heat. This work aims to expand the design potential of immaterial elements by transforming them into manipulatable, observable and intuitive materials. We explore one such immaterial, electric heat, and develop a maker-friendly fabrication pipeline and crafting support tool that allows users to experientially compose resistive heaters that generate heat spatially and temporally. These heaters are then used to couple heat and thermoreactive materials in a class of artifacts we term Thermoreactive Composites (TrCs). In a formal user study, we observe how designing fabrication workflows along dimensions of composability and perceivability better matches the working styles of material practitioners without domain knowledge of electronics. Through exemplar artifacts, we demonstrate the potential of heat as a creative material and discuss implications for immaterials used within creative practices.
This pictorial presents our material-driven inquiry into carbon-coated paper and kirigami structures. We investigated two variations of this paper and their affordances for tangible interaction; particularly their electrical, haptic, and visual aspects when shaped into three-dimensional forms through cutting, folding, and bending. Through this exploration, we uncovered distinct affordances between the two paper types for sensing folds and bends, due to differences in their material compositions. From these insights, we propose three applications that showcase the possibilities of this material for tangible interaction design. In addition, we leverage the pictorial format to expose working design schematics for others to take up their own explorations.
Kirigami Actuators are two-dimensional patterns that allow the translation of a simple actuation in one dimension into a complex transformation in another. Kirigami Actuators represent one metamaterial strategy that designers of Shape-changing Interfaces could utilize to minimize the size and complexity costs of actuation. Metamaterials yield great promise for HCI and Shape-changing Interfaces in particular. In an effort to reveal the promise of Kirigami Actuators for the design of Shape-changing Interfaces, this pictorial presents several design tactics for a specific Kirigami Actuator: the Lift pattern. These tactics outline how different components of the pattern can be changed to strategically alter the transformational qualities of the pattern. Initial concept sketches are presented as inspiration for future work.
We investigate the design of a shape-changing dial, i.e. a dial that can change its circumference and height to adapt to different contexts of interaction. We first explore how users grasp 3D printed dials of different heights and circumferences in order to inform the form factor of shape-changing dials. We then design a prototype, ExpanDial, inspired by morphing origami. We then use our prototype as a probe within design sessions and use the participants' feedback to devise a set of applications that can benefit from such reconfigurable devices. We also used the design sessions to better understand what kinds of interaction and manipulation could be harnessed from such a device.
Despite the advantages of tangible interaction, physical controls like knobs seem to be disappearing from a wide range of products in our everyday life. The work presented in this paper explores how physical controls can become dynamic, in terms of both shape and haptic force feedback. The paper contains two strands of work: First, we present a study that explores the relationship between haptic force feedback and different knob shapes, evaluating twelve distinct haptic stimuli in relation to six widely used knob shapes. Second, based on the insights collected in the study, we present the design of DynaKnob, a shape-changing knob that can change between four different knob shapes. DynaKnob illustrates how dynamic content control can be combined with dynamic shape and force feedback. Both the study and the design of DynaKnob contribute to understanding how adaptive physical interface controls could be designed in the future.
We introduce MorphIO, entirely soft sensing and actuation modules for programming by demonstration of soft robots and shape-changing interfaces. MorphIO's hardware consists of a soft pneumatic actuator containing a conductive sponge sensor. This allows both input and output of three-dimensional deformation of a soft material. Leveraging this capability, MorphIO enables a user to record and later play back physical motion of programmable shape-changing materials. In addition, the modular design of MorphIO's units allows the user to construct various shapes and topologies through magnetic connection. We demonstrate several application scenarios, including tangible character animation, a locomotion experiment with a soft robot, and prototyping tools for animated soft objects. Our user study with six participants confirms the benefits of MorphIO, as compared to the existing programming paradigm.
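The record-then-playback loop at the heart of this programming-by-demonstration approach can be summarized in a few lines; the sketch below is a minimal illustration in which read_sponge_sensor() and set_pump_pressure() are hypothetical stand-ins for the hardware interface, which the abstract does not describe.

```python
# Minimal sketch of MorphIO's record/playback idea with placeholder hardware drivers.
import time

def read_sponge_sensor() -> float:
    """Placeholder: return the conductive sponge's deformation, normalized 0..1."""
    return 0.0

def set_pump_pressure(level: float) -> None:
    """Placeholder: drive the pneumatic actuator to reproduce a deformation level."""
    pass

def record(duration_s: float, rate_hz: float = 20.0) -> list[float]:
    """Sample the sensor while the user squeezes and shapes the module by hand."""
    samples = []
    for _ in range(int(duration_s * rate_hz)):
        samples.append(read_sponge_sensor())
        time.sleep(1.0 / rate_hz)
    return samples

def playback(samples: list[float], rate_hz: float = 20.0) -> None:
    """Replay the recorded deformation by driving the actuator at the same rate."""
    for s in samples:
        set_pump_pressure(s)
        time.sleep(1.0 / rate_hz)
```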
Traditional crafting methods such as stitching, embroidering, dyeing and machine sewing can be enhanced to create novel techniques for embedding shape-changing and colour-changing actuation into soft fabrics. In this paper, we show how, by embedding Shape-Memory Alloy (SMA) wire, copper wire and thermochromic thread into needles and bobbins, we were able to successfully machine sew interactive morphological capabilities into textiles. We describe the results of extensive design experiments, which detail how differing actuations can be achieved through a matrix of parameters that directly influence a fabric's deformational behaviours. To demonstrate the usefulness of our 10 techniques, we then introduce and discuss an interactive artefact we produced using a subset of these techniques. We contribute these new techniques for creating soft interfaces imbued with actuation through tactile and self-morphing capabilities, without motors or LEDs. We draw insights from this on the potential of the proposed techniques for crafting interactive artefacts.
Wearables are integrated into many aspects of our lives, yet we still need further guidance to develop devices that truly enhance in-person interactions, rather than detract from them by taking people's attention away from the moment and one another. The value of this paper is twofold: first, we present an annotated portfolio of 'social wearables', namely technology designs worn on the body that augment co-located interaction. The design work described can serve as inspiration for others. Second, we propose a design framework for social wearables, grounded in prior work as well as our own design research, that can help designers ideate by raising valuable questions with which to begin their inquiry and against which to evaluate their designs. We illustrate the evaluative value of this framework through two social wearable designs, each tested in the appropriate social setting.
Designing technology to support instructed physical training is challenging, due to how instructions rely on complex interactional and situational social processes. To support in-the-moment instruction, we engaged in a co-creative Research through Design process with a Yoga instructor. Together, we designed and deployed Enlightened Yoga: a training class featuring wearable projecting lights that augment the instructor's and trainees' movements, and highlight the orientation and positioning of key body parts. We present insights from the design process and a study of the class. We show how the wearable lights enabled a new shared frame of reference between instructor and trainees, which became instructable through the way participants could reference and orient themselves to it. This allowed the instructor to extend his instructional strategies, and enabled trainees to better act upon cues. We discuss how this was made possible by jointly designing the technology, its coupling with the body, instructions and exercises.
As our landscape of wearable technologies proliferates, we find more devices situated on our heads. However, many challenges hinder their widespread adoption - from their awkward, bulky form factors (today's AR and VR goggles) to their socially stigmatized designs (Google Glass) and the lack of a well-developed head-based interaction design language. In this paper, we explore a socially acceptable, large, head-worn interactive wearable - a hat. We report results from a gesture elicitation study with 17 participants, extract a taxonomy of gestures, and define a set of design concerns for interactive hats. Through this lens, we detail the design and fabrication of three hat prototypes capable of sensing touch, head movements, and gestures, and incorporating several types of ambient display. Finally, we report an evaluation of our hat prototype and insights to inform the design of future hat technologies.
Schön describes the way a designer engages with their materials as a "conversation". In clothing design this typically involves tangible and situated actions such as draping, ripping, and cutting, actions that evoke responses from the fabric at hand. Dynamic fabrics (surface-changing fabrics that combine digital and physical states) are still novel fashion-design materials. When working with the digital, intangible qualities of these fabrics, how does a dialogue unfold for designers accustomed to working physically with fabrics? In this paper we examine the design process of Phem, a collection of garments that use dynamic fabrics that function similarly to augmented reality. We reflect upon the improvisations required to satisfy a productive dialogue with the digital forms of these materials. We conclude with a discussion that proposes revisiting Schön's notion of a conversation in the context of digital forms, and use Ingold's perspectives on making to inform this inquiry.
We present V.Ra, a visual and spatial programming system for robot-IoT task authoring. In V.Ra, programmable mobile robots serve as binding agents to link stationary IoT devices and perform collaborative tasks. We establish an ecosystem that coherently connects the three key elements of robot task planning - the human, the robot and the IoT - with a single mobile AR device. Users author tasks with the Augmented Reality (AR) handheld interface; placing the AR device onto the mobile robot then directly transfers the task plan in a what-you-do-is-what-robot-does (WYDWRD) manner. The mobile device mediates the interactions between the user, the robot, and the IoT-oriented tasks, and guides path-planning execution with its embedded simultaneous localization and mapping (SLAM) capability. We demonstrate that V.Ra enables instant, robust and intuitive room-scale navigation and interaction task authoring through various use cases and preliminary studies.
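The abstract describes a task plan authored in AR and handed to the robot by docking the device; the sketch below illustrates one plausible shape for such a plan, as a sequence of SLAM-frame waypoints interleaved with IoT commands. The Step and TaskPlan structures, field names, and device identifiers are assumptions for illustration, not V.Ra's actual format.

```python
# Illustrative (not V.Ra's) representation of an authored robot-IoT task plan.
from dataclasses import dataclass, field

@dataclass
class Step:
    kind: str                       # "navigate" or "iot_action"
    waypoint: tuple | None = None   # (x, y) in the shared SLAM map frame
    device: str | None = None       # IoT device identifier (hypothetical)
    command: str | None = None      # e.g. "turn_off" (hypothetical)

@dataclass
class TaskPlan:
    steps: list[Step] = field(default_factory=list)

# Authored in AR, then transferred to the robot when the device is docked.
plan = TaskPlan(steps=[
    Step("navigate", waypoint=(2.0, 1.5)),
    Step("iot_action", device="living_room_lamp", command="turn_off"),
    Step("navigate", waypoint=(0.0, 0.0)),
])
print(len(plan.steps))  # 3 steps for the robot to execute in order
```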
To promote safe and effective human-computer interactions, researchers have begun studying mechanisms for "trust repair" in response to automated system errors. The extent to which users distinguish between a system and the system's developers may be an important factor in the efficacy of trust repair messages. To investigate this, we conducted a 2 (reliability) x 3 (blame) between-groups factorial study. Participants interacted with a high or low reliability automated system that attributed blame for errors internally ("I was not able..."), pseudo-externally ("The developers were not able..."), or externally ("A third-party algorithm that I used was not able..."). We found that pseudo-external blame and internal blame influenced subjective trust differently, suggesting that the system and its developers represent distinct trustees. We discuss the implications of our findings for the design and study of human-automation trust repair.
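For readers who want the study design at a glance, the snippet below lays out the 2 x 3 condition matrix; the blame messages are only the fragments quoted in the abstract, and crossing them with the reliability factor is the sole logic shown.

```python
# The 2 (reliability) x 3 (blame attribution) between-groups design as a condition table.
import itertools

reliability = ["high", "low"]
blame_messages = {
    "internal":        "I was not able...",
    "pseudo-external": "The developers were not able...",
    "external":        "A third-party algorithm that I used was not able...",
}

# Each participant is assigned to exactly one of the six cells.
conditions = list(itertools.product(reliability, blame_messages))
print(len(conditions))  # 6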
This paper describes ObjectResponder, a tool that allows designers to use Artificial Intelligence (AI) to rapidly prototype concepts for context-aware intelligent interaction in the wild. To our knowledge, there are currently no available tools for designing and prototyping with AI within the actual context of use. Our application uses Google Cloud Vision to allow designers to assign chatbot-like responses to objects recognized by the smartphone camera. This enables designers to use object recognition labels as a means to diverge on possible interpretations of the context and to start generating ideas that can then be immediately tested and iterated. Initial results suggest that looking at the world from the perspective of the AI may enable designers to balance human and nonhuman biases, enrich a designer's understanding of the context, and open up unexpected directions for idea generation.
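A minimal sketch of the label-to-response pattern the abstract describes, using the Google Cloud Vision label detection client; the response table, matching logic, and function name are illustrative assumptions rather than ObjectResponder's implementation.

```python
# Sketch of assigning chatbot-like responses to recognized objects via Google Cloud Vision.
from google.cloud import vision

# Designer-authored responses keyed by recognition labels (hypothetical examples).
responses = {
    "coffee cup": "Looks like you could use a refill.",
    "bicycle": "Ready for a ride? Don't forget your helmet.",
}

def respond_to_photo(path: str) -> str | None:
    """Return the first designer-authored response whose label appears in the photo."""
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    labels = client.label_detection(image=image).label_annotations
    for label in labels:  # labels come back ordered by confidence
        reply = responses.get(label.description.lower())
        if reply:
            return reply
    return None
```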
A goal of interactive machine learning (IML) is to enable people with no machine learning knowledge to intuitively teach intelligent agents how to perform tasks. This study investigates how three factors of the design of an interactive reinforcement learning agent - generalization through time, immediacy, and a time delay - impact the user's experience with the agent. We conducted a human-subject experiment in which people trained four agents with different interaction designs to play a simple game using verbal instruction. All agent variations were modified versions of the Newtonian Action Advice algorithm, an interactive reinforcement learning agent that learns from verbal advice such as "go left". The results show that both a time delay and a probabilistic interface created poor user experiences. This is particularly important for IML designers, because current algorithms are almost universally probabilistic and do not immediately respond to the human's input.
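To illustrate the interaction pattern being varied (deterministic versus probabilistic response to advice), the sketch below shows a generic advice-biased action selector; it is not the Newtonian Action Advice algorithm, and the follow_prob parameter is an assumption used only to mimic a probabilistic interface.

```python
# Generic illustration of verbal advice biasing an RL agent's action choice (not NAA itself).
import random

ACTIONS = ["left", "right", "up", "down"]

def choose_action(q_values: dict, advice: str | None, follow_prob: float = 1.0) -> str:
    """Follow human advice (possibly probabilistically); otherwise act greedily on Q-values."""
    if advice in ACTIONS and random.random() < follow_prob:
        return advice  # with follow_prob=1.0 the agent responds immediately and deterministically
    return max(ACTIONS, key=lambda a: q_values.get(a, 0.0))

# follow_prob < 1.0 approximates the "probabilistic interface" users found frustrating.
print(choose_action({"left": 0.2, "right": 0.9}, advice="left", follow_prob=1.0))  # "left"
```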
Relative pointing using a tactile mobile device (such as a tablet or phone) on a large display is a viable interaction technique (which we call Pad in this paper) that permits accurate pointing. However, the limited device size has consequences for interaction: such systems are known to often require clutching, which degrades performance. We present E-Pad, an indirect relative pointing technique that takes advantage of the mobile tactile surface combined with its surrounding space. A user can perform continuous relative pointing starting on the pad and then continuing in the free space around the pad, within arm's reach. As a first step toward E-Pad, we introduce extended continuous relative pointing gestures and conduct a preliminary study to determine how people move their hand around the mobile device. We then conduct an experiment that compares the performance of E-Pad and Pad. Our findings indicate that E-Pad is faster than Pad and decreases the number of clutches without compromising accuracy. Our findings also suggest an overwhelming preference for E-Pad.
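The sketch below illustrates the core idea of continuing a relative-pointing gesture past the pad's bezel: cursor displacement keeps accumulating, first from touch deltas and then from hand-motion deltas, so no clutch is needed. The event format and gain values are assumptions for illustration, not E-Pad's transfer function.

```python
# Illustrative accumulation of cursor motion across on-pad and off-pad phases of one gesture.
def update_cursor(cursor, event, gain_touch=2.0, gain_air=1.5):
    """event = (source, dx, dy); source is 'touch' (on the pad) or 'air' (around it)."""
    source, dx, dy = event
    gain = gain_touch if source == "touch" else gain_air
    return (cursor[0] + gain * dx, cursor[1] + gain * dy)

cursor = (960, 540)  # large-display coordinates
for ev in [("touch", 12, 0), ("touch", 20, -3), ("air", 35, -5)]:
    cursor = update_cursor(cursor, ev)  # the gesture crosses the bezel without clutching
print(cursor)
```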
"What design innovations can the ubiquity and features of mobile devices bring to the programming realm?" has been a long-standing topic of interest within the human-computer interaction community. Yet, the important design considerations for using mobile devices in programming contexts have not been analyzed systematically. Towards this goal, we review a sample of the existing research work on this topic and present a design space covering (i) the target contexts for the designed programming tools, (ii) the types of programming functionality supported, and (iii) the key design decisions made to support the programming functionality on the mobile devices. We also review the design processes in the existing work and discuss objectives for future research with respect to (i) the trade-offs in enabling programming support given the constraints of mobile devices and (ii) applying human-centered methods particularly in the design and evaluation of programming tools on mobile devices.
We introduce Monomizo, a tangible daily and monthly calendar in a concrete-cast desktop object that allows users to print out their daily schedules. Monomizo was designed to integrate the benefits of digital and paper-based scheduling by providing easy conversion of schedules in digital devices to analog media (e.g., paper). To investigate users' schedule management experience with Monomizo, we conducted an in-field study with 10 participants over 6 days. The results showed that Monomizo helped users to review and reflect on the day via the screen and through attaching, writing, and physically possessing printed schedules. We also found value in encountering digital information through analog-metaphoric design that provides ambient permeation of schedules. Based on the findings, our study offers new possibilities for designing a tangible artifact that facilitates users' effective planning and use of their days.
We present two proof of concept experiences for a virtual reality (VR) game that draws on several medium-specific qualities of mobile, location-based, and tangible storytelling. In contemporary smartphone-VR, experiences are limited by short playtimes, limited interactions, and limited movement within a physical space. To address these limitations, we suggest a reconceptualization of smartphone-VR. Rather than design that deems the smartphone the least capable VR platform, we propose design that adds VR to an already rich mobile storytelling platform. We argue that by drawing on otherwise separate storytelling media, designers can circumvent limitations related to smartphone-VR while also extending the range of smartphone-based storytelling. We conclude by reflecting on possible implications of this extended design space.
There is growing interest in Digital Civics regarding how trust can inform the design of technology in the civic space. Previous work has identified several key elements of trust in digital civics to consider in design, such as the connection between trust and distance, trust as sociotechnical, trust as process, and trust design affordances. However, these elements have not yet been applied to design. We address this gap by engaging these elements in ongoing design research within Atlanta's Department of Immigrant Affairs. Our research inquiry with the department centered on developing a design intervention to improve the department's community engagement work. We developed the design intervention with the department through co-design, and then assessed and advanced it using the elements of trust. By reflecting on this design process informed by the elements, we unify the elements of trust in digital civics into a design framework.
Newly emerging urban IoT infrastructures are enabling novel ways of sensing how urban spaces are being used. However, the data produced by these systems are largely context-agnostic, making it difficult to discern what patterns and anomalies in the data mean. We propose a hybrid data approach that combines the quantitative data collected from an urban IoT sensing infrastructure with qualitative data contributed by people answering specific kinds of questions in situ. We developed a public installation, Roam-io, to entice and encourage the public to walk up and answer questions that suggest what the data might represent and enrich it with subjective observations. The findings from an in-the-wild study on the island of Madeira showed that many passers-by stopped and interacted with Roam-io, attempting to make sense of the data and contributing in situ observations.
Advancements in digital civics have enabled leaders to engage and gather input from a broader spectrum of the public. However, less is known about the analysis process around community input and the challenges faced by civic leaders as engagement practices scale up. To understand these challenges, we conducted 21 interviews with leaders on civic-oriented projects. We found that at a small scale, civic leaders manage to facilitate sensemaking through collaborative or individual approaches. However, as civic leaders scale engagement practices to account for more diverse perspectives, making sense of the large quantity of qualitative data becomes a challenge. Civic leaders could benefit from training in qualitative data analysis and simple, scalable collaborative analysis tools that would help the community form a shared understanding. Drawing from these insights, we discuss opportunities for designing tools that could improve civic leaders' ability to utilize and reflect public input in decisions.
In this paper we investigate human interactions with urban media installations by adopting two scales of analysis: the body scale (micro) and the city scale (macro). This twofold approach allows us to better understand the relationships between the design properties of outdoor installations and the urban spatial layout around them. We conducted in-the-wild studies of two urban media installations, one consisting of fixed components, and the other of movable components, which were deployed in different places and encouraged different types of whole-body interaction. We provide a detailed account of the micro and macro levels of interactions, based on observational and qualitative explorations. Our studies reveal that the urban spatial layout is a key element in defining the interactions and encounters around outdoor interfaces, and therefore it needs to inform the design process from the outset.
I describe the research and creation journey of a choreographic dance piece called SKIN that I made with another choreographer, 3 dancers, 1 musician and 1 developer. The performance integrates interactive technologies mapping inner movement to sound and video on stage. We followed a research through practice method that includes iterative cycles of choreographic practice and interaction design. This generated a set of research questions that I address through experience explicitation interviews of both audience and creative team members. The interviews allow me to investigate the lived experience of making and attending the performance and the emergent relationships between dance, media and interaction, as well as the tensions and negotiations that emerged from integrating technology in art. I discuss my approach as anti-solutionist and argue for more openness in HCI to allow artists to contribute to knowledge by embracing the messiness of their practice.
The management of bodily excretion is an everyday biological function necessary for our physiological and psychological well-being. In this paper, I investigate interaction design opportunities for and implications of leveraging intimate and somatic data to manage urination. This is done by detailing a design space that includes (1) a critique of market exemplars, (2) three conceptual design provocations, and (3) autobiographical data-gathering and labeling from excretion routines. To conclude, considerations within the labeling of somatic data, the actuating of bodily experiences, and the scaling of intimate interactions are contributed for designers who develop data-driven technology for intimate and somatic settings.
Physical training can be frustrating and hard, especially for those who experience additional challenges to access and control their proprioceptive senses. In the context of designing for children with Sensory-based Motor Disorder, we designed and deployed a series of Training Technology Probes to be used in circus training. Here we focus on how these were used, tested, and appropriated by children and instructors during a six-week circus training course. Through these explorations, we identified a range of potential benefits from using their functions in training. We present the Training Technology Probes and the benefits they brought to training. We show how the technology functions helped children focus and provided feedback related to posture and balance. Furthermore, their open-ended designs and versatile options for use were crucial in exploring their contributions to training, and in how they helped foster creative engagement with technology and training. Our work contributes towards understanding the specific requirements when designing for the target group, and more generally contributes design strategies for technology support for skill training.
We devised a Soma Design curriculum with accompanying teaching approaches for a seven-week course at a technical university. In our analysis of students' design concepts and process accounts, we found that they had opened an unusually rich and aesthetically engaging design space. But we also noted how they sometimes struggled with processes such as: "staying in the undecided" long enough to engage deeply with somaesthetic design imaginings; finding, refining and repeatedly returning to an aesthetic quality through the different phases of their design work; liberating themselves from pre-existing practices or objects in order to find entirely novel design possibilities; shifting from a more rationalistic design process, mainly based on argumentation, into a felt engagement where imaginations are acted out - not talked about; and lacking a technical toolkit for somaesthetically experiencing interactive sociodigital materials. Based on these insights, we provide a set of recommendations for how to teach Soma Design, and we have created an accompanying Soma Design toolkit that will support exploration and design.
Researchers are increasingly exploring interactive technology supporting human-system partnership in an exertion context, such as cycling. So far, most investigations have supported the rider cognitively, with the system "sensing and presenting" information to assist the rider in making informed decisions. In contrast, we propose systems that promote user-system co-operation by "sensing and acting" on information to assist the rider not only "cognitively" but also "physically" in an exertion context. Our prototype, "Ari", is a novel augmented eBike designed to facilitate user-system co-operation, where the information that each party can sense is used in regulating the speed to cross all traffic lights on green. A study with 20 bike riders resulted in five themes and six design tactics to further the design of interactive systems at the intersection of human-computer integration in an exertion context, thereby facilitating user-system co-operation to augment the exertion experience.
User involvement is well established in game and play design. But in a time when play design is becoming relevant in domains beyond pure entertainment, and play blends into everyday activity in diverse ways, we need to revisit old, and develop new, user involvement methods. Using a situated perspective and Research through Design, we present Situated Play Design (SPD), a novel approach for the design of playful interventions aimed at open-ended, everyday activities that are non-entertainment based. Like user-centered game and play design methods, our contribution leverages user engagement; like Participatory Design methods, our method acknowledges the co-creating role of end users. SPD extends those approaches by focusing on uncovering existing manifestations of contextual playful engagement and using them as design material. Through two case studies, we illustrate our approach and the design value of using existing and emergent playful interactions of users in context as inspirations for future designs. This allows us to provide actionable strategies to design for in-context playful engagement.
Taste offers unexplored opportunities for novel user experiences in HCI; however, it is difficult to design for. While most lab research has shown that basic tastes are consistently associated with positive or negative emotional experiences, the value of these mappings for user experience is less explored. In this paper we leverage 3D food printing technologies in an experimental study investigating the relationship between taste and emotional experience for use in HCI. We present four real-life inspired scenarios: product rating, sports match results, experiential vignettes, and website usability, to explore the understanding and expression of emotional meaning through tastes. Our findings extend previous emotion mappings for sweet and bitter tastes to applied scenarios. We also draw out fresh insights into the role of taste, flavor, and embodiment in experience design, reflecting on the role of 3D food printing in supporting taste interfaces.
This pictorial reports on the Play Poles prototype that was designed as part of a design ethnography investigating the Internet of Things (IoT) as a resource supporting outdoor play amongst groups of children. We use illustrations and annotations derived from video data and analysis to depict gestures, actions and social interaction that are significant in understanding the qualities of the Poles as a play resource. We argue that simple functions and direct, real-time control can be used by groups of children to support fun and creativity in outdoor play, whilst also highlighting opportunities and challenges in designing IoT play resources.
This paper describes a series of collaborative studio explorations in examining waste. We assembled a design team to explore how designers might conceive, handle, and rework the material left behind as waste within an on-campus makerspace and adjacent design labs. We turned discarded 3D printing filament into a reparative glue for broken prints and dissolved cardboard boxes into a medium for pollinator habitats. We describe how attending to waste involves understanding the relationships that define it, like how a material comes to be categorized as biodegradable but is impossible to break down in practice. Bringing this insight to the design context, we introduce the tactic of ecological inversions, experiments in reversing material flows to expose the wider infrastructure on which they depend. We discuss how ecological inversions could invite design researchers to notice the infrastructural relationships that exceed the physical limitations of the makerspace, revealing challenges around complicity and legibility.
The wellbeing of people with dementia in long-term care facilities is hindered, as they spend most of their time alone with little engagement in meaningful activities and an absence of pleasant sensory stimulation. We designed an interactive system called LiveNature that adopts a novel combined approach involving an ambient display unit and an interactive robotic sheep, to offer long-term access and to engage people with dementia in long-term care facilities in rewarding experiences. LiveNature aims to provide holistic multi-sensory engagement to provoke positive emotions, increase social bonding, and restore attentiveness and communication. The design was implemented within a Dutch nursing home. An evaluation of the user experience and the effectiveness of the design was conducted in a real-life setting with nine participants, five family members, two caregivers and four volunteers, using observational rating scales and semi-structured interviews. The results of the rating scales and the findings from qualitative data showed evidence of enhanced positive engagement.
An increasing variety of technologies are being developed to support conservation of endangered wildlife; however, comparatively little attention has been devoted to their design. We undertook three years of ethnographic fieldwork and design research with the recovery team of an endangered Australian bird (the Eastern bristlebird) to explore the team's culture and practices, as well as their perspectives on including collection and analysis of environmental acoustic recordings into their conservation praxis. Through thematic analysis, we identified the team's collective goals, culture, conservation activities, and technology use. We found that acoustic technologies have promise for supporting conservation of furtive and vocal Eastern bristlebirds. Trialing acoustic technologies also revealed that the team had strong interest in their use. We identified knowledge gaps, time constraints, and technology aversion as barriers to be overcome with future interaction design research. We offer an initial set of practical guidelines for designing technologies to support conservation.
We explore four plausible futures and experiment with a speculative approach to investigate "wicked problems" [14]. Grounded in the field of design fiction, this work combines insights from the climate impact research community, technology meta trends, and plant science, all of which work as a basis for the exploration of relationships between humans, plants, and technology. Based on scenarios from the climate impact research community, we prototyped four plausible worlds and visualized exemplary scenes from the daily lives of their citizens. Sketching and low-fidelity prototyping supported the process of gaining knowledge and iterating on storytelling.
In this paper, we describe how we used workshops as boundary objects that bridged the dual goals of data infrastructure literacy and design. We organized 11 workshops with community leaders from the Westside neighborhoods in Atlanta. The moments of breakdown that took place at the workshops allowed the participants to critically reflect on the socio-technical complexities of the data infrastructure, which scholars have argued is key to data literacy. Additionally, these moments of breakdown also offered us, as designers, insights we could use to reimagine data infrastructures. We contribute an ethnographic analysis of the workshops we organized, along with a preliminary set of data infrastructure literacy guidelines. In doing so, we invite the DIS community to take up such workshops as a way to continue to design data infrastructures, after design.
We report on a design research inquiry aimed at understanding and exploring the values, practices, and perspectives of people that actively embrace and choose to live within collective houses and mobile vehicles as their homes. A goal of our work is to inquire into how such lived alternatives of 'home' can take a step toward broadening possibilities for conceptualizing 'domestic' technology and provoking questions around how it might be critiqued, imagined, and designed. We offer a brief overview of our ongoing research with a sample of collective and mobile dwellers, and propose three themes that extend prior generative work in this area: critique through living, taking time to adapt, and the transitional home. We use these themes in a design-led approach to propose six fictional future technology concepts that aim to (i) critically reflect on and provoke questions about commitments in current mainstream visions of domestic technology and (ii) explore new possibilities for engaging with the material, social, and technological conditions shaping the lives of our collective and mobile dweller participants. We conclude with a reflection on our work, its limitations, and opportunities it suggests for future research and practice.
To gain historical perspective on the role of technical expertise in the labor movement, we explore the data-driven practices of mid-century American labor unionists who appropriated techniques from scientific management to advocate for workers. Analyzing the data artifacts and academic writings of the Management Engineering department of the International Ladies Garment Workers Union, we describe the rhetorical use of data within a mutual-gains model of participation. We draw insights from the challenges faced by the department, assess the feasibility of implementing these approaches in the present, and identify opportunities for the participatory design of workplace advocacy systems moving forward.
This pictorial engages with the concept of 'diversity' as it has been taken up in the current technology industry by companies such as Google, Facebook, and Twitter. Drawing on methods of design fiction, we use a visual parody of a fictional social media corporation to explore possible consequences of diversity efforts, both within industrial workplace settings and in the platforms they produce. By outwardly foregrounding diversity through explicitly absurd presentations, our investigation uses humor and farce to draw attention to how such efforts further entrench the very forms of institutional racism, ableism, and sexism they are meant to disrupt; for example, by creating additional work for the people they are designed to include. We offer these visual explorations not to promote or discourage diversity efforts, but to help design researchers imagine what such efforts become when labor and unintended consequences go unaccounted for.
Novel social, civic or entertainment opportunities might emerge when spatially distributed public displays become interlinked in meaningful ways. Yet little is known about the effect of intrinsic design dimensions, such as how multiple displays should be spatially arranged, how their content should be linked, and how their locations and content should dynamically change over time. We therefore conducted a two-month-long design study of a distributed public display system that invited passers-by to answer hyperlocal questions. By comparing the performance of different content and location arrangement strategies, we reveal distinct spatiotemporal user engagement patterns and the specific local conditions that shaped them. We also discovered several contextual factors that inhibit more widescale engagement, among them the conceptual novelty, the apparent purpose, and the perceived cumulative effort of engaging with several displays. Consequently, this study provides insights on how public displays can be linked to augment the effects of distribution.
Video communication systems have long suffered from a narrow field of view. To address this limitation, one study proposed a symmetric 360° video communication system combining an omnidirectional camera and a hemispherical display. However, that system still had several issues; for example, the hemisphere facing away from the user was not visible, making it inconvenient to observe the remote environment. To solve these issues, we introduce OmniGlobe, a novel symmetric full-360° video communication system that incorporates an omnidirectional camera, a full spherical display, and several visual and interaction techniques. An experiment indicates that our system reduces the inconvenience of observing the remote environment and increases remote space awareness and gaze awareness to support remote collaboration. We also discuss takeaways, limitations, and application areas of our system that can help improve it further.
The amount of information available on the web is too vast for individuals to be able to process it all. To cope with this issue, digital platforms started relying on algorithms to curate, filter and recommend content to their users. This problem has generally been envisioned from a technical perspective, as an optimization issue and has been mostly untouched by design considerations. Through 16 interviews with daily users of platforms, we analyze how curation algorithms influence their daily experience and the strategies they use to try to adapt them to their own needs. Based on these empirical findings, we propose a set of four speculative design alternatives to explore how we can integrate curation algorithms as part of the larger fabric of design on the web. By exploring interactions to counter the binary nature of curation algorithms, their uniqueness, their anti-historicity and their implicit data collection, we provide tools to bridge the current divide between curation algorithms and people.
Blockchain is a disruptive technology which has significantly challenged assumptions that underpin financial institutions, and has provoked innovation strategies that have the potential to change many aspects of the digital economy. However, because of its novelty and complexity, mental models of blockchain technology are difficult to acquire. Building on embodied cognition theories and material-centered design, we report an innovative approach for the design of BlocKit, a physical three-dimensional kit for materializing blockchain infrastructure and its key entities. Through engagement with different materials such as clay, paper, and transparent containers, we identified important properties of these entities and materialized them through physical artifacts. BlocKit was evaluated by 15 experienced bitcoin users, with findings indicating its value for their high level of engagement in communicating about, and designing for, blockchain infrastructure. Our study advances an innovative approach for the design of such kits, an initial vocabulary to talk about them, and design implications intended to inspire HCI researchers to engage in designing for infrastructures.
Today, most technology users have to deal with a growing amount of personal data across many devices and online platforms. There is a growing need for tools that can help people decide more intentionally what data to keep or discard. We created five design concepts in the form of video prototypes to probe alternative design strategies for supporting users. Automation, aggressiveness, and temporality were key dimensions we explored. We conducted interviews with 16 participants using the concepts as a starting point for discussion. Participants had a range of reactions: some wanted to retain full control over keeping and discarding decisions, while others welcomed more automatic tools. We identify common ground in the need for a contextual and nuanced approach in design. We use these results to outline and reflect on possible future design directions for personalization, automation, new keeping or discarding actions, and privacy.
A growing body of HCI work on affective interfaces aims to capture and communicate users' emotions in order to support self-understanding. While most such interfaces employ traditional screen-based displays, more novel approaches have started to investigate prototypes based on smart materials and actuators. In this paper, we describe our exploration of smart materials and actuators, leveraging their temporal qualities as well as common metaphors for real-time representation of changes in arousal through visual and haptic modalities. This exploration provided the rationale for the design and implementation of six novel wrist-worn prototypes evaluated with 12 users who wore them over 2 days. Our findings describe how people use them in daily life, and how their material-driven qualities such as responsiveness, duration, rhythm, inertia, aliveness and range shape people's emotion identification, attribution, and regulation. Our findings led to four design implications, including support for affective chronometry covering both the rise and decay time of emotional responses, design for slowness, and design for expressiveness.
In this paper we explore the fusion of a sophisticated user interface prototyping tool with a cognitive modeling approach based on the ACT-R cognitive architecture. The resulting tool allows the creation of complex interactive prototypes for which quantitative performance predictions can be derived by running cognitive models. Such models are automatically generated by monitoring a designer's interaction while completing a task scenario with a prototype. Using the example of a real-world user interface for a software-controlled riveting machine, we demonstrate the scope of the approach and compare model predictions to data gathered in two empirical studies. The encouraging goodness-of-fit of the predictions strongly suggests that the use of such simulated interactions with prototypes provides a promising and cost-effective approach to predict learning trajectories and performance data for routine tasks. We argue that predictive prototyping fosters the acceleration of iteration cycles without resulting in additional expenditures for UX designers.
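As a rough illustration of predicting performance from a logged demonstration, the snippet below sums per-operator time estimates over an interaction trace; this is a KLM-style simplification with assumed operator times, not the ACT-R models the paper generates from the designer's demonstration.

```python
# KLM-style simplification: estimate task time by summing assumed per-operator durations
# over a trace logged while a designer demonstrates the task on the prototype.
OPERATOR_SECONDS = {"point": 1.1, "click": 0.2, "keypress": 0.28, "think": 1.35}

def predict_task_time(trace: list[str]) -> float:
    """trace: sequence of operators logged during the designer's demonstration."""
    return sum(OPERATOR_SECONDS[op] for op in trace)

demo = ["think", "point", "click", "keypress", "keypress", "point", "click"]
print(round(predict_task_time(demo), 2))  # rough skilled-performance estimate, in seconds
```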
Recent years have seen increased investment in data-driven farming through the use of sensors (hardware), algorithms (software), and networking technologies to guide decision making. By analyzing the discourse of 34 startup company websites, we identify four future visions promoted by data-driven farming startups: the vigilant farmer who controls all aspects of her farm through data; the efficient farmer who has optimized his farm operations to be profitable and sustainable; the enlightened farmer who achieves harmony with nature via data-driven insights; and the empowered farmer who asserts ownership of her farm's data, and uses it to benefit herself and her fellow farmers. We describe each of these visions and how startups propose to achieve them. We then consider some consequences of these visions; in particular, how they might affect power relations between the farmer and other stakeholders in agriculture: farm workers, nonhumans, and the technology providers themselves.
Conversational interfaces (CIs) have the potential to empower a broader spectrum of users to conduct visual analysis independently. Yet recent approaches do not fully consider the user's characteristics; in particular, the objective of matching the user's language has been understudied in visual analysis. To close this gap, we introduce an answer space motivated by Grice's cooperative principle for framing personalized communication in complex data situations. We conducted an online survey (N=76) to analyze communication preferences and a qualitative experiment (N=10) to investigate personalized conversations with an existing CI. Our results suggest that, to match the user's language properly, additional user characteristics should be considered alongside their knowledge level. Communication that mismatches users' preferences triggers negative reactions, while preference-aligned communication evokes positive ones. As our analysis confirms the importance of matching the user's language in visual analysis, we provide design implications for future CIs.
This paper investigates whether voice assistants can play a useful role in the specialized work-life of the knowledge worker, in this case in a biology lab. It is motivated both by promising advances in voice-input technology and by a long-standing vision in the community of augmenting scientific processes with voice-based agents. Through a reflection on our design process and a limited but fully functional prototype, Vitro, we find, first, that scientists wanted a voice-enabled device that acted not as a lab assistant but as lab equipment. Second, we discovered that such a device would need to be deeply embedded in the physical and social space in which it served scientists. Finally, we discovered that scientists preferred a device that supported their practice of "careful deviation" from protocols in their lab work. Through this research, we contribute implications for the design of voice-enabled systems in workplace settings.
Autonomous vehicles and robots are increasingly being deployed to remote, dangerous environments in the energy sector, search and rescue, and the military. As a result, there is a need for humans to interact with these robots to monitor their tasks, such as inspecting and repairing offshore wind turbines. Conversational Agents can improve situation awareness and transparency while providing a hands-free medium for communicating key information quickly and succinctly. As part of our user-centered design of such systems, we conducted an in-depth, immersive qualitative study of twelve marine research scientists and engineers interacting with a prototype Conversational Agent. Our results expose insights into the appropriate content and style for natural language interaction, and from this study we derive nine design recommendations to inform future Conversational Agent design for remote autonomous systems.
The number of households using stand-alone conversational agents is increasing rapidly. However, recent research has revealed that in some of these households the use of agents decreases over time, and we know little about why. We therefore aim to understand how people use such devices in their daily lives and to explore the associated obstacles and difficulties. We conducted a long-term (12-week) user study in which we followed eight households, examining their use of Amazon Echo at home. From a series of diaries, surveys, and interviews with these eight first-time users, we identified how their experiences changed over time and how conversational agents lose their presence at home. We found that the voice interface, physical form, and at-home installation affect users' perceptions and expectations of smart speakers. Based on these findings, we discuss challenges and design opportunities for future at-home conversational agents.
The paper offers an ethnographic account of racial and cultural difference as a site for contesting dominant practices of computing and technology. Specifically, we focus on how a collective of Indonesian biohackers position the care labor of a generation of women (referred to as Nenek-nenek, or grandmothers, in Bahasa Indonesia) to retrace the origins and boundaries of their making, hacking, and citizen science practices. The paper's contribution is to bring the study of the political economy of hacking and making into conversation with themes of racial and cultural difference in postcolonial computing across HCI, STS, and design. More specifically, the paper examines how Indonesian biohackers position situated histories and expertise as properly technological. Further, we show how their articulation of Indonesian difference was in turn appropriated by foreign hackers and commentators to envision tech futures against the status quo.
With the Syrian crisis entering its eighth year, refugees have become the focus of research across multiple disciplines, including design and HCI research. While some researchers have reflected on designing with refugees, these accounts have been limited to design workshops conducted in formal spaces. Reflecting on our experiences of conducting design research in informal refugee settlements in Lebanon, we unpack lessons learnt, design practices, and research approaches that facilitate design engagements with refugees. We highlight the value of participants configuring the design space, of taking a dialogical approach, and of creating a safe space for both participants and the researcher. We also reflect on the roles that researchers may take on when conducting similar research. In doing so, we contribute specific design practices that may be transferable to other, similar contexts.
Information and communication technologies for development (ICTD) can support people with chronic illnesses living in rural communities. In Kenya, however, the use of ICTD in areas with high HIV infection rates and many undetected cases of hypertension remains underexplored. Partnering with a health facility in Migori, Kenya, we report on the uses of technology in managing HIV. We found that the use of technology to manage HIV was influenced by the roles and routines of patients and clinicians, the trust between practitioners and patients, and the sources of data that clinicians use for patient examination. We use these results to inform the design of technologies that can support patients living with comorbid HIV and hypertension, as well as their care providers, in managing care in similar settings. We also reiterate the important mediating role that community health volunteers (CHVs) can play in the adoption of technology as patients manage their condition(s) once out of hospital.