Whether you aspire to Weiser's [4] disappearing technology of ubiquitous computing or Negroponte's [3] atoms to bits or something in-between, there is something inescapably material about the digital. How much more so when the goal is hybridity: the bridging of atoms with bits. In this keynote, I examine the intersection of this hybridity with human values, in the present and in the longer term. I situate my remarks in the observation that our scientific and technological capacities have far surpassed, and will continue to far surpass, our moral ones, that is, our ability to use wisely and humanely the knowledge and tools that we develop. My reflections are grounded in the intellectual traditions of value sensitive design [1] and multi-lifespan design [2]. I take up such questions as: What human values and human impacts are implicated by hybridity and, in particular, hybrid materials? From this perspective of human values, are all hybrid materials the same? Is embedding computation in my glove the same as in my finger? Or are some materials deserving of special status? And, if so, in what ways? What cultural considerations come to the fore? What of non-humans? Turning to metaphor and mental models, what metaphors should be developed for hybrid materials? And in what ways will those highlight and hide embedded computation? What mental models should be associated with these materials? How will these models position people to think about their relationship with hybrid materials, including when to use them, how to adapt them, when to dispose of them, and when to eschew them? More generally, what constitutes responsible innovation around hybrid materials and how should we go about pursuing it? To gain purchase on these questions, I use the frame of value scenarios and anticipatory futures. My comments highlight the importance of taking the material impacts of the digital seriously. At stake is nothing less than what sort of society we want to live in and how we experience our humanity.
Often when we talk about or do critical and speculative design, the work is future-oriented, looking towards technologies or conditions that are yet to come. While such work is important and compelling, there is also an opportunity to turn the critical and speculative impulse towards our contemporary moment, to consider how design might participate in alternate forms of living now or in the near present. I call this kind of work "design experiments in civics" because it uses design processes and products as a mode of inquiry into how we might differently structure our collective lives. What's more, given the pressing issues we are facing in 2019, there is an urgency to design that looks to address how we might live in these turbulent times. One move to make is to shift from thinking that design might solve the issues of climate change or democracy or capitalism, and instead look to design as a way to contribute to different configurations of resources and action, through creativity and resourcefulness. In this talk I will share examples from my own work as well as the work of other artists and designers doing these "design experiments in civics." From these projects, I will draw out a set of themes for critical practices aimed at exploring and enabling alternate forms of living and speculating on more diverse forms of togetherness in the contemporary moment and near-present. In particular, for the TEI audience I will address the ways in which explorations of materialities and interactions can express and support alternative collective practices and consider what the unique contributions of this research community might be to the broad field of "design experiments in civics."
Current techniques for teaching spinal mobilisation follow the traditional classroom approach: an instructor demonstrates a technique and students attempt to emulate it by practising on each other while receiving feedback from the instructor. This paper introduces SpinalLog, a novel tangible user interface (TUI) for teaching and learning spinal mobilisation. The system was co-designed with physiotherapy experts to look and feel like a human spine, supporting the learning of mobilisation techniques through real-time visual feedback and deformation-based passive haptic feedback. We evaluated physical fidelity, visual feedback, and passive haptic feedback in an experiment to understand their effects on physiotherapy students' ability to replicate a mobilisation pattern recorded by an expert. We found that simultaneous feedback has the largest effect, followed by passive haptic feedback. The high fidelity of the interface has little effect, but it plays an important role in the perception of the system's benefit.
Reminiscence in dementia care often does not make use of interactive technology. In this work we present a study conducted in two dementia care facilities aimed at developing prototypes for reminiscence. We conducted contextual inquiries over a week to learn how 80 people with varying stages of dementia reminisce throughout the day. We present resulting needs and three tangible prototypes designed to facilitate reminiscence. These prototypes - the pyramid, the set of drawers and the jukebox - were tested in three exploratory field studies. We elaborate on the features of the prototypes that facilitated communication and reminiscence and share insights from failures that need to be considered when designing tangibles in the dementia context. To visualize both positive and negative aspects we introduce a model of successful interaction in the dementia context.
Customizable systems that let children and adults with disabilities control audio playback can support different forms of therapy, including music therapy and speech-language therapy. We present SenseBox, a low-cost, open-source, customizable hardware/software prototyping platform to turn everyday objects into audio triggers for people with disabilities. Users can add tags to physical objects that when in proximity to SenseBox trigger the playback of associated audio files. We designed SenseBox with input from three therapists and an assistive technology expert. We detail our human-centered design process that took place over 16 months and describe a detailed example use case where SenseBox was used to create an accessible music player for a child with cognitive disabilities. This project illustrates how to design physical computing prototyping platforms for therapists to create customized interfaces for their clients without requiring prior programming or design experience.
While improvised theatre (improv) is often performed on a bare stage, improvisers sometimes incorporate physical props to inspire new directions for a scene and to enrich their performance. A tech booth can improvise light and sound technical elements, but coordinating with improvisers' actions on-stage is challenging. Our goal is to inform the design of an augmented prop that lets improvisers tangibly control light and sound technical elements while performing. We interviewed five professional improvisers about their use of physical props in improv, and their expectations of a possible augmented prop that controls technical theatre elements. We propose a set of guidelines for the design of an augmented prop that fits with the existing world of unpredictable improvised performance.
We present IRelics, a tangible interaction platform for the popularization of field archaeology. IRelics allows users to experience archaeological fieldwork activities as a serious game by using a set of tangible tools. We developed an innovative LWIR (long-wave infrared) sensing system, which underpins the design of tangible tools that provide realistic manipulation experiences. By interacting with IRelics, a player may experience different archaeological activities such as excavation and cleaning. We conducted two observational studies to evaluate the system's usability and its effectiveness at popularizing archaeology. Findings suggest that the IRelics platform can enhance the engagement of participants by providing a positive and interactive environment while teaching them unfamiliar knowledge.
The biological prototyping revolution is in motion, and new tools are needed to empower HCI researchers, designers, makers, and bio-enthusiasts to experiment with live organisms. We present OpenLH, a liquid handling system that empowers users to conduct accurate and repetitive experiments with live biology in a sterile, open, and affordable way. OpenLH integrates a commercially available robotic arm with custom 3D printed parts, a modified pipette, and a visual block-based programming interface. The system is as accurate as commercial liquid handlers, capable of repetitive tasks with micro-scale accuracy, easy to operate, and supports multiple materials, including biomaterials, microorganisms, and cell cultures. We describe the system's technical implementation and two custom interfaces. We demonstrate the system's impact for the HCI community with two use cases that involve experimentation with live biology in non-traditional fields: visual design using pigment-expressing E. coli, and a beer brewing experiment using serial dilution in a home context.
This paper describes the underlying motivation, creation process, and evaluation outcomes of Shiva's Rangoli, a tangible storytelling installation that allows readers to impact the emotional tone of a narrative by sculpting the ambience of their space. Readers interact with a tangible interface that acts as a boundary object between the reader and the fictional world. We discuss how these kinds of interfaces can engage readers to feel like they are a part of the story, endow them with responsibility, and blur the line between real and fictional worlds.
Smart textiles integrate sensing and actuation components into their structures to bring interactivity to fabrics. We describe how we adapted two existing fiber arts techniques, double weaving and yarn plying, for the purpose of creating a woven textile that changes color in response to touch. We draw from this experience to make three core contributions: descriptions of our experiments plying yarns that change between three color states; descriptions of double weaving structures that allow us to support interactivity while hiding circuitry from view; and suggestions for how these techniques could be adapted and extended by other researchers to make richly crafted and technologically sophisticated fabrics.
Millions of children have challenges with anxiety that negatively impact their development, education, and well-being. To address this challenge, we developed version 2.0 of Mind-Full, a wearable, mobile neurofeedback system designed to teach young children to self-regulate anxiety. We present a mixed methods evaluation of a seven-week intervention in schools. We report on a subset of outcome measures related to 10 children's anxiety and stress in the classroom and describe mediating socio-technological processes that may have impacted outcomes. Findings showed improvement in children's ability to self-regulate anxiety and reduced cortisol levels for some children. Qualitative findings suggested that children who made multimodal connections during system-mediated learning and had teacher support for learning transfer responded well to the intervention. We suggest that framing mental health app design as a distributed, adaptive, socio-technological system enables designers to better meet individuals' unique and changing mental health needs.
Aporia is a pluridisciplinary adaptation of a play by Bernard-Marie Koltès, led by visual artist Alain Quercia in a live performance mixing different art forms, scientific theories, and technologies. To propose original reflections on "otherness", the sole actor of this project embodies in turn all the characters of the play. His performance is supported by real-time pitch-shifting software, which allows him to modify his own voice without affecting its emotional details. These voice changes are remotely controlled by a smart sculpture integrated into the staging. In this paper, we present the voice processing tool used and the iterative co-design of the remote controller in collaboration with the director of Aporia.
Personalization of shoes is of increasing importance to designers, design researchers, and manufacturers as mass customization progresses towards ultra-personalized product service systems. Many attempts have been made to design co-creation platforms that allow end users to personalize their own shoes. Those co-creation platforms primarily concentrate on color selection. This research takes a different approach and designs a toolkit for maker-oriented users to co-manufacture their own shoes. The toolkit was designed at different levels and deployed to makers worldwide via crowdfunding. Backers were surveyed before deployment and interviewed after two years to understand personalization over a longer period of time with the research product. We find that users whose toolkits contain more bespoke tools and materials are more likely to personalize their shoes while co-manufacturing. The research provides guidelines for researchers and designers creating toolkits, designing personalization product service systems/configurators, and engaging in tangible bespoke processes.
This paper provides resources and design recommendations for optimizing position input for pressure sensor matrices, a sensor design often used in eTextiles. Currently, applications using pressure matrices for precise continuous position control are rare. One reason designers opt against using these sensors for continuous position control is that when the finger transitions from one sensing electrode to the next, jerky motion, jumps, or other non-linear artifacts appear. We demonstrate that interdigitation can improve transition behavior and discuss interpolation algorithms to best leverage such designs. We provide software for reproducing our sensors and experiment, as well as a dataset consisting of 1122 swipe gestures performed on 17 sensors.
We present an evaluation of three prototype tangible user interfaces (TUIs) for mealtimes for preschoolers. Building on past work identifying value tensions between adults' and children's perspectives at meals, we evaluated TUIs to address different tensions in this context (for example, the tension between children's interest in experimenting with food versus adults' interest in cleanliness). Thirteen pre-school children and their parents tried out the prototypes, as did an additional seven preschool teachers. Adults and children alike were excited by the prototypes; parents were surprised by children's increased food intake, and children used the prototypes to engage in artistic expression with food traces. However, children's increased consumption was extrinsically motivated by the prototypes rather than intrinsically motivated by hunger. We conclude that TUIs have the potential to enhance shared meals between children and adults but also have the potential to distract or persuade children in inappropriate ways. We present design guidance differentiating these two outcomes, such as incorporating the TUI into pre-existing mealtime objects and routines.
This paper presents the design rationale and concept development behind MemoryReel, a tangibly interactive desktop device that records special moments of online social interactions between couples and friends over a long distance, and supports later reminiscence. In a human-centered design process that started with a two-act design inquiry, we developed the design concept and implemented a medium-to-high-fidelity interactive prototype. We then invited 20 participants to experience the prototype and provide comments. The design rationale, together with the analysis of the exploratory user study findings, provides insights on the dimensions and strategies of a design space for digital memories and long-distance relationships, with an emphasis on reminiscence support.
There is a danger in carelessly replacing routine tasks in homes with IoT-based automation. Since routines such as watering houseplants also have positive influences on inhabitants' wellbeing, they should be transformed through carefully considered designs. To this end, we present an attempt to use technology to augment a set of houseplants' non-verbal communication capabilities. First, we describe in detail how implicit interactions have been designed to support inhabitants in watering their plants through meaningful interactions. Then, we report on a field study with 24 participants comparing two alternative design implementations based on contrasting embodied interaction technologies (i.e., augmented reality and embedded computing technology). The study results highlight shortcomings of today's smartphone-mediated augmented reality compared to physical interface alternatives, considering measurements of perceived attractiveness and expected effects on determinants of wellbeing, and we discuss the potential of combining both modalities in future solutions.
This paper explores methodological considerations of using constructive assemblies as a participatory design tool to explore new conceptual and material possibilities in saturated product categories. Constructive assemblies are tangible, reconfigurable, modular physical sets that can be combined in multiple different ways. They are valuable as collaborative tools on account of how they facilitate social inclusion by having a low skill barrier to participation. They are also valuable as generative tools, as users are able to quickly build complex constructions with the constituent components in different configurations. In this paper, we present a study in which a series of eight Participatory Design (PD) workshops were conducted with these assemblies, in which stakeholders explored possibilities for re-designing the everyday use of soap. Participants were drawn from the Fast Moving Consumer Goods (FMCG) industry and everyday soap users, and the workshops took three different configurations: video conference and shared location, video conference and distributed location, and physical workshop and shared location. Our analysis highlights the consequentiality of the materiality of the assembly, the interplay between the specific workshop tasks, their setup and the physical constraints of the toolset, and emergent social behaviour from the experiments. We discuss these findings in relation to existing frameworks.
Textural changes are a promising way to give performance feedback to sportspeople, who need less attention-demanding modalities. However, previous research does not address which textural property's change is most appropriate for conveying information in a specific context, i.e., sports performance feedback. We focus on sports towels as a case to understand how to give feedback on sports performance through changes in textural properties. We address this gap by conducting experiments with 32 sportspeople to investigate (1) which textural properties (e.g., roughness) can be perceived by sportspeople through a towel and (2) which can convey information about sportspeople's performance (e.g., smooth texture signalling good performance). The results indicate that hardness and bendability are appropriate for conveying information about sportspeople's performance. To the best of our knowledge, this result is the first to show that a change in the state of a textural property can convey specific feedback.
The inspiration for designing tangible, embedded, and embodied interaction can come from stranger places than needs assessments and observations. In conceptualizing home+, a pair of interactive furnishings enabling independent living, our design-research team found inspiration in modern painting and contemporary dance. Giorgio de Chirico's painting "Furniture in the Valley" (1928) captures the curious vitality and intermingling of furniture outside their normal confines that our team translated into this active pair of home+ robots and their intimate rapport. Two contemporary dance works, Andrea Miller's "(C)arbon" and Ohad Naharin's "STOP," explore the interrelationships of three bodies, coping with their circumstances, which inspired the interoperability and control strategies across the pair of robots and a human "in-the-loop." This paper aims to inspire other design researchers to seek, in strange places, inspirations for tangible, embedded, and embodied interaction.
Digital adaptation of physical games often involves complete digitization, resulting in the replacement of physical movements with virtual counterparts using input devices. We believe that augmenting a game by adding digital elements while also preserving physical movements can enhance player engagement. We present Re-Twist, a digitally augmented version of Twister. We introduce the elements of time and score in Re-Twist by using a pressure-sensitive Twister mat that communicates with a projected screen. To investigate the effect of digital augmentation, we conducted a comparative study between the original Twister and Re-Twist. Thirteen of our 16 participants (81%) preferred Re-Twist over the original Twister because of the increased competition and urgency created by digital augmentation. We discuss the effect of digital augmentation on the competition, social, and challenge aspects of the game. This can guide new ways of game design by revisiting similar augmentation of other traditional games.
Uncertainty is common when working with data and becomes more important as processing big data gains attention. However, no standard tangible interface element exists for inputting uncertain data. In this article, we extend the input space of two traditional TUIs: dial and slider. We present five designs that are based on dials and sliders and support uncertain input. We conduct focus group interviews to evaluate the designs. The interviews allow us to extend existing design requirements for parameter control UIs to support uncertain input.
Popularization of historical data implies going beyond the language and knowledge specific to this scientific field in order to make it understood. This applies to people's perception of the world, since many changes have been applied to our surrounding environment. Communicating the way people used to hear, see, and smell allows for a better rendering of what once was, and a more effective connection between the teller and their audience. After the sensory data is gathered through heterography comes the issue of assembling and presenting it. For the Bretez project, a reconstruction of an eighteenth-century Paris neighbourhood, this process is done manually. This paper describes a new approach to this aspect of the historian's work. Using an immersive editing platform controlled through virtual reality devices, we built a pipeline dedicated to facilitating the creation process for historians, transparently allowing the result to be exported to multiple video formats (immersive or not). The specific case of prototyping this pipeline with the Bretez project for future user experiments is described and discussed.
A child's early math development can stem from interactions with the physical world. Accordingly, current tangible interaction studies focus on preschool children's formal (symbolic) mathematics, i.e., number knowledge. However, recent developmental studies stress the importance of nonsymbolic number representation in math learning, i.e., understanding quantity relations without counting (more/less). To our knowledge, there are no tangible systems based on this math concept. We developed an initial tangible-based mixed-reality (MR) setup with a small tabletop projector and depth camera. Our goal was to observe children's interaction with the setup to guide our further design process towards developing nonsymbolic math training. In this paper, we present our observations from sessions with four 3-to-5-year-old children and discuss their meaning for future work. Initial clues show that our MR setup leads to exploratory and mindful interactions, which might be generalizable to other tangible MR systems for child education and could inspire interaction design studies.
Mixed-reality haptic devices introduce a gateway to otherwise intangible virtual content, creating a life-like immersive experience. Congruent haptic sensation requires faithful integration of visual stimuli and perceived tactile sensation. Unfortunately, current commercial mixed-reality systems are unable to reproduce the physical sensation of fluid vessels, due to the shifting nature of fluid motion. To this end, we introduce SWISH, a novel type of ungrounded mixed-reality system capable of affording users a realistic haptic sensation of fluid behavior. We also present solutions to prominent challenges of rendering haptic fluid behavior, especially in coordinate translation and virtual adaptation to physical limitations. Our virtual-to-physical coupling uses Nvidia Flex's Unreal Engine integration, wirelessly controlling a motorized mechanical actuation system housed in a plastic "vessel". In this paper we discuss the current state of SWISH and present results from our preliminary user study, followed by a description of our planned future phases.
Push buttons, sliders, switches, and dials: we use such controls every day and everywhere, but we barely notice them. Expressive Tactile Controls is a research experiment with a series of controls that are augmented by giving them human personalities. What if each control had a unique personality and could express its emotions only through haptic feedback? How could our interaction with controls be improved? The research approached these questions by constructing a series of button prototypes able to express themselves with varying tactile and kinesthetic feedback according to the interaction between the user and the controls.
The landscape orientation of a smartphone offers a better aspect ratio and a more extensive view for watching media and photography. However, it presents challenges of occlusion, reachability, and frequent re-gripping in one-handed interactions. To address these issues, we took the opportunity of deformation gestures to interact with future flexible smartphones. A preliminary survey was conducted to understand one-handed landscape-mode usage patterns. A first study was then conducted to identify the three most preferred one-handed landscape-mode grips. In a second study, we gathered unique user-defined deformation gestures to identify the set of most natural and intuitive gestures corresponding to each grip. We also found three gestures that can be performed in more than one grip. Finally, we discuss the influence of the grips on performing gestures.
In this work in progress, we start to unpack the act of making in a digital fabrication process. In particular, we focus on one kind of digital fabrication, 3D printing, which is typically considered to be highly automated but in this case is not. In this process, a tension exists between our skills, the properties of a novel material, and the capabilities of a novel machine. As design researchers, we navigated through the design space that emerged in this tension and explored how to 3D print in wood. In unpacking this tension between machine, material, and designer, we pay attention to how the embodied nature of this process was essential for its development. We start to explore how we might explain the embodied act of making in the context of digital fabrication through the lens of ambiguity and resistance, notions previously used to unravel craftsmanship.
Over the past decades, monetary transactions have become increasingly dematerialized. Nowadays, payments are expected to be "frictionless", as easy as possible, for example through NFC or biometric payment methods. Although frictionless payment methods hold advantages in terms of safety, convenience, and privacy, they also pose problems, since the inherent physicality of cash has been shown to help people with budgeting and expense tracking. In this paper we describe our ongoing work in creating more mindful interactions, in this case by re-physicalizing part of digital transactions with the aim of combining the advantages of digital payment methods with those of physical money. We present the evaluation of a haptic interface for in-store digital payments. Results from a four-week lab evaluation indicate that people are capable of getting a feel for the absolute values of a payment as presented through our interface.
Interactive home-based rehabilitation therapy is a promising treatment development for stroke survivors. As the impairment characteristics of each stroke survivor are unique, interactive rehabilitation systems need to be customized to the functional and movement quality outcome goals of the patient, and adaptable over time as therapy progresses. In this paper, we present our iterative co-design process creating a set of modular therapy objects and a rehabilitation protocol for upper extremity stroke survivors. Our objects and training protocol are adaptable components within a computer-vision-based interactive system that captures and analyzes stroke survivors completing rehabilitation activities. We report on findings from a pilot study with nine stroke survivors and a workshop with five physiotherapists, where we highlight challenges in designing objects for impaired grasps, opportunities for aligning objects with activities of everyday living, and the need for design sensitivity.
This article presents a paper-based Tangible User Interface (TUI) that facilitates the production of complex queries on a Cultural Heritage (CH) repository. The system helps users easily make use of the data elements and Boolean logic that describe the collections. This research presents a design methodology divided into two main phases: a User Experience (UX) and User-Centred Design (UCD) phase, in which potential users' behaviours are analysed, followed by the development and evaluation of the TUI prototype. The TUI uses off-the-shelf electronics and a paper-based set of tokens to engage the user with the system, thus facilitating exploration of CH collections through querying.
Mole Messenger is a pair of connected creatures that help children share and send messages to their loved ones who may be far away. Each Mole Messenger box houses a pushable mole. When one is pushed, the other pops up, as if traversing to the other side of the globe. The mole can be tapped to carry different color messages when pushed and can be used to play physically engaged games across distances. This paper describes the design and implementation of the system, explores various approaches within the design space, engages with the context and user experience, and meditates on the potential for positive psychological impact. Inspired by one family's story of separation and how they helped their children stay connected, we hope Mole Messenger can become an important tool for communication and healthy emotional development in children.
We present CairnFORM, a shape-changing cylindrical display that physicalizes forecasts of renewable energy availability. CairnFORM aims at creating and encouraging new socially-shared practices by displaying energy data in collective and public spaces, such as public places and workplaces. It is 360°-readable, and, as a dynamic physical ring chart, it can change its cylindrical symmetry with quiet motion. We conducted two user studies. The first study revealed the attractiveness of CairnFORM in a public place and its usability for range and comparison tasks, which makes CairnFORM useful for analyzing renewable energy availability. The second study revealed that a non-constant motion speed is a better visualization stimulus at a workplace.
This paper introduces a tangible user interface (TUI) concept, named COMB, designed for child-oriented musical interaction and education. The interaction concept of the interface is based upon the natural behavior and metaphors found in children's play during construction with building blocks. This paradigm is used to increase the accessibility of otherwise expert-focused digital and electronic music to children. We evaluated our prototype in two different study setups. We found preliminary indications that this concept fosters imitation during learning. Therefore, the usage of shape as a meaningful element of interaction could be a promising design strategy for interfaces addressing children. In this paper we present the theoretical foundation of the concept as well as technical details of the prototype. Furthermore, we discuss how this concept can be applied to increase the accessibility of technology in various other domains.
Providing data visualization authoring tools for the general public remains an ongoing challenge. Inspired by block-printing, we explore how visualization stamps as a physical visualization authoring tool could leverage both visual freedom and ease of repetition. We conducted a workshop with two groups---visualization experts and non-experts---where participants authored visualizations on paper using hand-carved stamps made from potatoes and sponges. The low-fidelity medium freed participants to test new stamp patterns and accept mistakes. From the created visualizations, we observed several unique traits and uses of block-printing tools for visualization authoring, including: modularity of patterns, annotation guides, creation of multiple patterns from one stamp, and various techniques to apply data onto paper. We discuss the issues around expressivity and effectiveness of block-printed stamps in visualization authoring, and identify implications for the design and assembly of primitives in potential visualization stamp kits, as well as applications for future use in non-digital environments.
This paper presents Wattom, a highly interactive ambient eco-feedback smart plug that aims to support a more sustainable use of electricity by being tightly coupled to users' energy-related activities. We describe three use cases of the system: using Wattom to power connected appliances and understand the environmental impact of their use in real time; scheduling these power events; and presenting users with personal consumption data disaggregated by device. We conclude with a user study in which the effectiveness of the plug's novel interactive capabilities (mid-air, hand-based motion matching) is assessed. The study explores the effectiveness of Wattom and motion matching input in a realistic setup, where the user is not always directly ahead of the interface and not always willing to point straight at the device (e.g., when the plug is at an uncomfortable angle). Despite not using a graphical display, our results demonstrate that our motion matching implementation was effective, in line with previous work, and that participants' pointing angle did not significantly affect their performance. On the other hand, participants were more effective while pointing straight at Wattom, but did not report this as significantly more strenuous than pointing toward a comfortable position of their choice.
Several systems have illustrated the concept of interactive fabrication, i.e. rather than working through a digital editor, users make edits directly on the physical workpiece. However, so far the interaction has been limited to turn-taking, i.e., users first perform a command and then the system responds with physical feedback. In this paper, we present a first step towards interactive fabrication that changes the workpiece continuously while the user is manipulating it.
To achieve this, our system FormFab does not add or subtract material but instead reshapes it (formative fabrication). A heat gun attached to a robotic arm warms up a thermoplastic sheet until it becomes compliant; users then control a pneumatic system that applies either pressure or vacuum thereby pushing the material outwards or pulling it inwards. Since FormFab reshapes the workpiece continuously while users are moving their hands, users can interactively explore different sizes of a shape with a single interaction.
We present Mechamagnets, a technique for facilitating the design and fabrication of haptic and functional inputs for physical interfaces. This technique consists of a set of 3D printed spatial constraints which facilitate different physical movements, as well as unpowered haptic profiles created by embedding static magnets in 3D printed parts. We propose the Mechamagnets taxonomy to map the design space of this technique for designers and makers. Furthermore, we leverage the use of magnets by instrumenting these objects with linear Hall effect sensors to create functional digital inputs. We showcase Mechamagnets with a series of novel physical interfaces made with this technique.
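As a rough illustration of how a linear Hall effect sensor reading might be turned into a discrete digital input for a Mechamagnets-style interface, one can threshold the sensed field strength per detent. This is a minimal sketch under stated assumptions: the threshold values, the three-detent dial, and the function name are illustrative, not details from the paper.

```python
# Hedged sketch (not the authors' implementation): classify a raw
# analog reading from a linear Hall effect sensor into one of several
# discrete states, e.g. the detents of a magnet-embedded dial.

def classify_reading(raw, thresholds):
    """Return the index of the first threshold the reading falls below,
    or len(thresholds) if it exceeds them all."""
    for state, limit in enumerate(thresholds):
        if raw < limit:
            return state
    return len(thresholds)

# Example: a hypothetical three-detent dial whose embedded magnet
# produces distinct field strengths at each detent (ADC counts made up).
detent_thresholds = [300, 600]
print(classify_reading(150, detent_thresholds))  # detent 0
print(classify_reading(450, detent_thresholds))  # detent 1
print(classify_reading(800, detent_thresholds))  # detent 2
```

In a real deployment the thresholds would be calibrated per object, since magnet placement and sensor distance shift the measured range.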
One key concept of Tangible User Interaction is interactive surfaces, in which we transform a surface into an active interface between the physical and virtual world. In this paper we describe the design process of a novel interactive surface that can be used for movement therapy for children with Developmental Coordination Disorder (DCD). Children with DCD suffer from impairment in motor development which influences their overall movement quality and affects their daily life. Traditionally, the rehabilitation techniques and tools are often static, non-interactive, monotonous and unappealing to children. The purpose of this study is to design an interactive surface that combines physical motor exercises with digital games. Through an iterative co-creation process with patients and physiotherapists, we developed "Matti" as an interactive gaming mat to increase the motivation of the children during their rehabilitation, by linking the therapy exercises with digital games thus providing more engagement and better results.
With an increased interest in Organic User Interfaces, it becomes more and more important to help designers create deformable devices. One current challenge is the integration of rigid and soft electronic components with the device. In this paper, we propose to place electronic components based on how the user is interacting with the device, i.e., in which way the device is being deformed. For this, we developed a prototyping tool that takes as input a set of captured user gestures and a 3D model of the deformable device, and then visualizes the stress distribution resulting from the deformation during interaction. Our tool finally suggests where not to place components because the location is highly deformed when users interact (e.g., where not to place a rigid battery that would constrain interaction), or alternatively where to place components to sense deformation more accurately and efficiently (e.g., a bend sensor to detect a specific gesture, or an energy harvesting component). We evaluated our approach by collecting interaction data from 12 users across 3 deformable devices (a watch, camera, and mouse) and applied the resulting stress distributions to the placement of selected electronic components.
With the move towards digital interventions for educational purposes, there has been a loss of tangible and material interfaces, the consequences of which are still being understood. Meanwhile, there is an ongoing lack of gender diversity within STEM-facing majors and careers. In response, we have created a physical prototype of BeadED Adventures, a system that uses a physical controller made up of jars of colorful beads to control modified Twine games, following constructivist philosophies of learning and emphasizing player autonomy. By controlling the experience, the player creates a beaded bracelet that is personalized based on their choices within the game. In addition to the controller, we are creating an educational Twine game in which the player explores an abandoned castle, solving computational thinking puzzles to escape.
Mementos carry personal symbolic meaning and can be used to privately reflect on the past or share memories [28]. Older adults spend much time collecting mementos but spend less time telling the stories behind them. Once they pass away, the stories behind their mementos vanish with them. In this paper, we present Slots-Memento, a tangible device aiming to facilitate intergenerational story sharing and preservation for older adults. It builds on the metaphor of the slot machine, and integrates functions of memento photo display, story recording, and preservation. Our design process started with a contextual inquiry, for which older adults and young adults were recruited, aiming to understand the status quo of their memento storytelling and to define design requirements. A preliminary evaluation was conducted; discussion and future work are in the final part.
Physical games involving blindfolded players have a timeless appeal and the restricting of perceptual channels can be insightful for players and observers regarding embodied experience. Wireless, mobile and wearable technologies open up further opportunities for designing bodily play experiences through exploiting sensory deprivation. To better understand the potential for movement-based games in which vision and/or audio is restricted, we iteratively developed and play-tested a series of three to four player chasing games. Based on our tests, we suggest the importance of ambiguity, proximity, and freedom of movement to support designing sensory deprivation games.
Squeeze interaction, defined as squeezing a soft object that affects computed feedback, is a promising interaction technique due to its expressive character. Skweezee for Processing is a Processing library that allows makers to implement squeeze interactions in a lightweight manner. The library offers a set of features based on data extraction algorithms and aims to preserve the dynamics of squeezes in the mapping from user action to system output. Distributing Skweezee for Processing as an open source library, we invite the community to further investigate the potential of squeeze interactions and to contribute to the extension and improvement of the library. We emphasize the potential for rich squeeze interaction by demonstrating a game that implements squeeze interaction as its core mechanic. Additionally, we demonstrate the ease of implementing squeeze interactions in a variety of settings using the Skweezee for Processing library.
There is an increasing concern to improve the accessibility of artworks for blind people. Much of the effort has been focused on helping visually impaired people access exhibition facilities, but the works of art hosted there are still difficult for them to experience. In particular, the appreciation of visual artworks is hindered because blind visitors are not allowed to touch them, in order to conserve their aesthetics and value. In this work we explore our findings using a prototype of a voice-interactive multimodal guide designed to improve the accessibility of visual works of art, such as paintings, for blind people. The prototype identifies tactile gestures and voice commands that trigger audio descriptions and sounds while a person explores a 2.5D tactile representation of the artwork placed on the top surface of the prototype. Our preliminary findings include the results of eight user tests and Likert-type surveys.
This paper presents Telling the Bees, a prototype of an immersive media experience that explores the connections among ritual, tangible interfaces, and procedural interactivity. The project provides a basis for further exploration of hybrid interfaces in contexts of culture and tradition, specifically ritual traditions. We build upon the historical practice of "telling the bees," in which beekeepers and their families would share important news with their bees. A conversation between the interactor and the platform is mediated by tactile and vocal inputs with procedural audio-visual feedback. Our interface encourages body postures and input-response dynamics to bridge ritual tradition and digital immersion.
A flexible wooden device, "Trækvejret," which emulates a slow rate of breathing, is placed in a coffee break room. This work examines related work on reflection, triggering reflection, and breathing and biofeedback technologies. We demonstrate how a simple technology such as Trækvejret, which does not measure or give feedback about a user's breathing, can nonetheless be useful and provoking, encouraging reflection and, potentially, behaviour change.
Recent innovations in fashion and smart textiles have contributed new visions of wearable computing, addressing the body through the cultural and social self. In this work, we draw on speculative design, maker technologies, and zoomorphism to explore how wearables might support sociability, and present Hooze, a fashion accessory that entices touch through its zoomorphic qualities and visual appearance. We describe our design and prototyping process, and reflect on how Hooze inspires transformative designs of wearables.
To situate the skills of the textile designer within the HCI-process, we present a case of a hand puppet with a purpose-woven smart textile pattern. The qualities found in traditional textile design are tacitly synthesized into the eTextile-design process. We see this mentality as having a natural dialogue with HCI-practice. The hand puppet consists of two layers: an inner sensor glove, designed to detect the movements of the user's fingers, and a woven outer layer that has a touch-sensitive user interface integrated into its woven structure. The two interfaces can be operated simultaneously by two separate users: an adult and a child. Our interest is to better understand how traditional textile design variables can be utilized in user interface and user experience design. We aim towards a synthesis of woven eTextile design, consisting of user interface design, pattern design, sensor structure design, and textile layout design.
In order to regain motor control, stroke patients should do various exercises that target specific body functions. During daily exercising, they need assistance from therapists or caregivers in setting tasks, providing feedback, and other activities. Due to the aging population, the demand for technology support in stroke recovery has rapidly increased in the last decade. This paper presents a portable and interactive prototype designed to facilitate arm reaching exercises. It consists of a tabletop device and a game map that serves as visual guidance for arm movements. The device also provides light and sound feedback, and patients can choose different game modes. Preliminary user trials support the implementation of tangible interactive training in rehab centers and further inspire us to build a tabletop training system.
The auto-adjustable bra combines new technologies such as soft robotics, computational design, and e-textiles to develop a bra that uses a pneumatic system to compensate for severe asymmetries in breast volume (anisomastia). In the present work, the bra aims to adjust to the measurements of a woman's breasts through air channels located in the internal mesh of the bra cup. This inflatable structure balances the breasts' volume while holding them. Furthermore, the conductive fabric that covers the bra cup works as a sensor to control the air injection system: by comparing the fabric's resistance in both bra cups, it sends a signal to the air pump to stop the air injection. Thus, the bra keeps its shape and the air pump is disengaged. The project could have a global impact on women with anisomastia by raising their self-esteem, restoring their emotional balance, and possibly enhancing their social and sexual relationships.
We describe two prototypes from the Baby Tango project: electronic textile toys that enable soft, tangible, full-body interaction. The paper presents interaction techniques that bridge the physical, the digital, and the social, as well as a case study in constructing interactive composite textiles. Given that the softness of the toy is a central design constraint, most of the circuit, including the sensors, is embroidered directly on the surface of the artifact using technical threads (with varying electro-mechanical properties) and a digital embroidery/laying machine. This submission includes design and technical details, as well as initial interaction design scenarios. The next steps of this project will explore how these toys could support the development of empathy in toddlers through embodied play. Further work is needed to develop background research, collaborations with early childhood researchers, and empirical studies. Future work will include the development of these studies; iterating aspects of interaction and play through participatory design; and improving the technical design to focus on reliability, robustness, and durability.
Motorbike commuting is the new frontier for exploring digital technology where designing for embodied interaction takes on a more central role. In this paper, building on previous work on embodied self-monitoring, we present our ongoing work of designing a modular platform with a particular focus on real-time estimation and presentation of posture data while riding. In particular, we present the "Bike Area Network" (BANk) as a system architecture to help guide the design of such a platform. We share our ongoing work as an invitation for the community of researchers and practitioners of designing for embodied interaction to further explore this new frontier of research.
In this paper we introduce the concept of the Phone as an Emotive AI sidekick through a set of novel interactions wherein a multi-axis actuated robotic charging stand we made acts as a 'body' for the AI on our phones. The novel interactions begin with how the robotic platform embodies, and thus communicates, our devices' understanding of the world, continue with the affordance for more varied expressive output, and work towards extending current phone functionality to far-field, context-driven interactions.
Technostress is an emerging and significant psychological phenomenon associated with the use of technology. As humans increasingly encounter computational technology on a daily basis, there is a need to manage the anxieties and tensions that can result from these interactions. Using the lens of critical design, we created a design probe to explore this concept of technology induced stress. The probe builds on the topic of slow technology and embraces multisensory experiences as a tool for individuals to reflect on their relationship with technology.
Sound Shifting is an artistic research project that focuses on the physical representation of sound - this means the visualization and materialization of invisible phenomena that significantly shape our perception. We present a system that allows the transformation from sound into form in real-time by using a newly developed machine, the Audio Foam Cutter. This machine converts sound into polystyrene stripes that are arranged into sculptural objects. The resulting sound sculptures provide information about the represented sounds by their shape and aesthetic features and expand the range of our auditory perception to the tangible domain. The sound sculptures are snapshots of our soundscape and form a physical archive of sound representations. The Sound Shifting project aims to create an awareness of the materiality of sonic movements and affects.
The integration of new digital and physical fabrication tools with fine arts has the potential to provide new outlets for artistic expression, while at the same time raising questions about the role of material and process in artistic practice. In this work, we present Lithobox, a system that translates the traditional ceramic and lighting technique of lithophanes into a means of creating illuminated 3D models through a creative approach that utilizes both digital and tangible construction. Through work sessions with nine artists, we explored how the Lithobox fabrication process impacted the way artists manifest design ideas and engage in creative exploration in crafting. At the TEI arts track, we plan to show our system and the physical lithophanes from our work with artists. The attendees will likely discuss the design, material, and artistic aspects of our exhibit. From these discussions, our goal is to gain insight into beneficial directions for integrating digital technology into traditional fine arts practices.
Resonance Ver.M is an interactive video projection installation that allows the audience to peek into the artist's inner world. The project combines a series of experimental executions including self-training, performance, EEG recording, interpretation of the bio-signal, a subjective dream log, exhibition, and interaction. The artwork shows an approach to the investigation of dreams and a new form of "Human-Human interaction" through electrophysiological signals. It also introduces an innovative form of interaction between the real and dream worlds, the conscious mind and the unconscious brain.
In this architectural research exploration, we challenge the notion of an interactive architectural surface as single-layered, two-dimensional interaction interface. Instead, we propose the notion of Interactive Voluminous Substance, which moves the interaction experience into four dimensions, shifting it from far-field, proximity-based interaction to a near-field, tactile one. We present four features of architectural expression that could potentially sustain the embodiment of this Substance: spatial positioning, geometry, expression, hybrid material composition and interaction design. If the future architectural interiors and exteriors are made from Voluminous Architectural Substance, how will it be to dwell with them? We propose two physical prototypes and two interaction stories as speculative objects probing this question.
In this paper we explore human-environment interrelationships by utilizing both hybrid materially-oriented approaches and metaphorical representations. Inspired by the 'canary in a coalmine' metaphor we developed a tangible interface to sense the environment and provide a physical experience. The design utilizes life-like characteristics, like shape memory alloys and feathers to illustrate the metaphor. The aim of this approach is to propose a tangible interface as a mediator to provoke empathy for environmental issues. For that, the paper addresses an interdisciplinary field of design, society and technology through an embodied system.
"After Words" is an interactive sound installation in which the viewer's participation causes basic units of speech to puncture the space, interrupting and overlapping yet remaining untied to any specific language. Inspired by early speech synthesizer technologies that could only emit syllables, consonants, and vowels, these structures house circuit boards which trigger audio files at random when the connected stand has air blown into it. The sounds emitted gesture towards a desire for language, but foreclose the possibility of meaning. Ultimately, "After Words" aims to create a space where sounds question logic, embrace nonsense, and untether the voice from language while revealing underlying connections between human and machine.
In this paper, we discuss the opportunities and challenges of creating responsive environments with Electroactive Polymers (EAPs). Our previous research on tools and methods for EAPs enabled us to develop two public installations: SOLO and Electric Animal Plant. Going beyond the demonstration of EAPs, these projects explore the aesthetics and interactivity of such shape-changing materials. We explain actuating and sensing capabilities of EAPs in these works, and we reflect on the response of the participants to the installations.
Ekphrasis is a mixed-media installation consisting of close-up videos of a heavily scarred body projected onto an elastic screen. The screen has one string attached to its centre, which in turn is attached to a stepper motor. The motor pulls the screen in a controlled random pattern, stretching it and releasing it, sometimes carefully, sometimes violently. The piece explores the relationship between the digital representation of the body and its corporeity, recreating the trauma that generated the scars on the medium itself, metaphorically recreating the human body on a new layer of representative abstraction.
A display device, whilst seemingly static, shows a lot of movement. Movement or change on such screens is perceived due to rapid, successive changes in frames. Outside of screens, perceived movement on static objects may be achieved through a combination of optics and illusion. Kyne is a series of visual experiments involving the perception of change in static, distorted artifacts. It involves varied methods of obscuring or masking parts of static artifacts and taking advantage of persistence of vision to animate them. The artifacts explored use paper, refreshable static non-illuminated e-ink displays, and laser-cut acrylic, and their masks include transparencies and digital projections. This paper illustrates the experiments conducted with different artifact-mask pairings and speculates on possible future pairings.
In this art project, the ephemeral and intangible aspects of human communication are represented by soap bubbles. Shapeless, intangible, and insubstantial speech - once shouted out through the speaker's mouth, it disappears unless someone hears it immediately, and even if it is heard, the message will be forgotten as time goes by - is transferred to a semi-tangible yet still fleeting bubble. The bubble machine that we created provides person-to-person and person-to-space interaction. The machine has an iris mechanism that varies its outlet size in reaction to the participant's speech pattern, as if it were trying to say something. Once the participant pauses, the machine blows out bubbles of various sizes. The floating bubble represents the subtle state of a message from interpersonal communication that lies between the real and digital worlds. It also creates a certain delay until it pops, which is a metaphor for how we often delay sending text messages through chatting apps. We believe that anyone can be an artist. By open-sourcing the details of the fabrication process and materials, we want to encourage people to build the machine, interact with it at any location, and use and modify it as an art tool for realizing their own ideas, whether for art or not.
Digital information is typically only understood via lengthy explanations or data visualizations. My goal is to use data to create physical objects that not only represent the information used to create them but also provide an interaction that can reinforce (or contradict) the core foundation of their creation. Manipulating designs with generative methods to create objects is one possible output. With interactive technologies, however, digital information can be given physical form through the interactive control of objects. This is particularly poignant in the format of an installation, where multiples of objects can be controlled via data streams and offer additional feedback through user interaction. My hybridization of materials, techniques, and technologies culminates in fabrications that are full of contradictions. At once inviting and intimidating, the mechanisms and intricate designs draw the viewer in for a closer look. However, the interactively triggered movement starts very abruptly, tending to startle the unprepared. Furthermore, my forms evoke things found in nature (wings, water, etc.) that are typically symbols of freedom and lightness, but these are secured to the built environment, unable to escape. These hybrids - part animal, part machine - are musings on the consequences of progress, a cyborg possibility. They are assembled from 3D printed, machined, and laser-cut parts so that they appear cold and high-tech, in direct contrast to the organic forms that they animate. They utilize contemporary micro-controller and sensor technology to achieve the interactive abilities that complete their lifelike facade. Amidst my work, the viewer is no longer passive but becomes an active participant. The pieces are activated by the presence of people, and an unknowing collaboration occurs - a symbiosis of sorts, between the art and the audience.
DOOR is an artwork that aims at exposing some of the social and political impact of artificial intelligence, computer vision, and automation. The project uses a commercially available computer vision system that predicts the interactor's ethnicity, and locks or unlocks itself depending on this prediction. The artwork showcases a possible use of computer vision, making explicit the fact that every technological implementation crystallises a political worldview.
Transference is a hybrid computational system for improvised violin performance. With hardware sensors and digital signal processing (DSP), the system shapes live acoustic input and computer-generated sound. An electromyographic (EMG) sensor unobtrusively monitors movements of the left hand, while a custom glove controller tracks bowing gestures of the right arm. Through continuous musical gesture the performer is able to actuate and perturb streams of computationally transmuted audio. No additional layers of windowing or semantically-inflected processes of machine learning mediate this process. Remaining at the level of signal processing, the lack of windowed and/or statistical mediation creates a sense of fine-grain tactility and physical transduction for the performer. The strategies employed are sufficiently generalizable to apply to situations beyond those imagined and implemented here within the scope of augmented violin performance.
In "A Cyborg Manifesto," Donna Haraway describes how by the late twentieth century, humans have become hybridized with machines. While society has long been concerned with the ever-growing encroachment of technology into human activity, Haraway challenges this concern, proposing instead a kinship between organism and machine. The result is the cyborg, a hybrid body that fluidly transcends mechanical and organic boundaries. Why Should Our Bodies End at the Skin? for sensor-equipped dancer, robotic percussion, sound exciters, and live sound processing, explores these ideas of intersectionality and fluidity between organism and machine by connecting human action and mechanical tasks. This paper describes the creative framework and associated technologies involved in the development of the piece.
A Very Real Looper (AVRL) is an audio-only virtual reality (VR) interface inside of which a performer triggers and controls music through full-body movement. Contrary to how musical interfaces in VR are normally used, a performer using AVRL is not disconnected from their surrounding environment through immersion, nor is their body restrained by a head-mounted display. Rather, AVRL utilizes two VR sensors and the Unity game engine to map virtual musical sounds onto physical objects in the real world. These objects help the performer locate the sounds. Using two handheld VR controllers, these sounds can be triggered, looped, acoustically affected, or repositioned in space. AVRL thus combines the affordances of the physical world and a VR system with the reconfigurability of a game engine. This integration results in an expansive and augmented performance environment that facilitates full-body musical interactions.
This paper describes the Eighth Nerve Guitar, a combined hardware / software instrument designed for computer-mediated improvisational performance. Key conceptual, aesthetic, and technical concerns will be discussed and multiple projects that utilize this live performance instrument will be referenced. A new instrument, the Guitar-Like Object (in development) will also be introduced.
In Argentine tango, dancers typically respond to fixed musical recordings with improvised movements, each movement emerging in a wordless dialog between leader and follower. In the interactive work Machine Tango, this relation between dancers and music is inverted, enabling tango dancers to drive musical outcomes. Motion sensors are attached to the dancers' limbs, and their data is sent wirelessly to a computer, where algorithms turn the movement into sound. In doing so, the computer inserts itself in this ongoing nonverbal conversation. Instead of traditional tango instruments such as the bandoneón, dancers generate and transform the sounds of typewriters and found sounds. The system's musical response to movement shifts during the dance, becoming more complex. The two dancers must navigate the resulting unstable musical structures as one body, responding with stylized tango movements. The difficulty of this task and the juxtaposition of the traditional with the experimental are integral to the performance aesthetic.
Upwell is a mixed reality performance that allows audience members to explore virtual and physical worlds with two dancers. The environment evokes the feeling of being underwater. One dancer, wearing a conventional VR head-mounted display and wearable controllers, can navigate a room-scale virtual reality setup and interact with dynamic visual and sound elements. Because she wears custom-made wearable controllers on her palms, she can make intricate gestures to develop direct relationships with bioluminescent particles in the virtual water. The other dancer interacts only with the visuals created by the VR dancer, without perceiving the virtual world itself. Upwell can be presented as a single-person art installation as well as a performance that projects different views onto a screen.
We present Craftec, an extendable toolkit system focused on engaging older adults in maker technology by supporting their use of common crafting skills. Craftec comprises LilyPad Arduino-based toolkits that promote easier crafting with hard and soft media. We describe the system's design, a pilot test with 8 students, and 2 two-hour single-session workshop evaluations with 17 older adults. We found that Craftec facilitated the efficient integration of circuits within crafted items, including fewer short circuits compared to a basic LilyPad Arduino kit. We discuss insights into creating an older adult toolkit focused on building and prototyping rather than facilitating STEM education.
While tree trunks are standardized as lumber, branches are typically chipped or burned. This paper proposes a workflow to upcycle such mundane and diverse natural material into architectural elements. Introducing an online design interface, we let users participate in the design and fabrication workflow from collecting branches to CNC milling. The branches are first scanned, and then key geometrical features are extracted and uploaded to the online game "BranchConnect". This application lets multiple non-expert users create 2D layouts. At each intersection between two branches, the geometry of a lap joint and its cutting path are calculated on the fly. A CNC router mills out the joints accordingly, and the branches are assembled manually. Through this workflow, users go back and forth between physical and digital representations of tree branches. The process was validated with two case studies.
Communication theory suggests that people tend to interact with interactive artifacts as if these were human. For decades, this understanding has been applied to designing singular, embedded artifacts at a small physical scale. In this paper, we extend the same theory and practice to the dimension of space: to designing interactive, physical environments and their components. A conceptual ground for this is found in the "pattern language" developed by Alexander et al. for designing static physical environments. Upon this ground, we construct a systematic framework for designing "collaborative environments", shaped as well by our own concepts of Direct Mapping, Conveyed Mapping, and Space Agency, to strive for more human-human-like interactions between human beings and their physical surroundings. Our lab-based study generates a hypothetical design as qualitative validation of the framework, which has significance for designing tangible, embedded, and embodied interaction as it inevitably extends to the dimension of space, entertaining, serving, and augmenting us.
HCI in recent years has shown an increasing interest in decentering humans in design. This decentering is a response to concerns about environmental sustainability, technology obsolescence, and consumerism. Scholars have introduced theoretical notions such as natureculture from feminist technoscience. Yet how such theories translate into material design practices remains an open question. This research seeks to broaden the repertoire of nonanthropocentric design practices in HCI. Specifically, it draws on the natural processes of decomposition as a creative approach to develop and test design tactics. To do so, we curate and critique hundreds of examples of decomposition in architecture, design, textile, crafting, and food making. We observe that decomposition often depends on what we call a "scaffold," and we further propose four variants of it as design tactics: fragmenting, aging, liberating, and tracing. We then test the tactics over a period of four months in a ceramics studio using diverse materials, with a mixture of successes and failures. We conclude by reflecting on how the design tactics might be deployed in nonanthropocentric HCI/design.
While previously proposed hardware for pin-based shape displays has improved various technical aspects, a clear limitation has remained in the haptic quality of variable 'force' feedback. In this paper, we explore a novel haptic interaction design space with a 'force'-controlled shape display. Utilizing high-performance linear actuators with current-reading functionality, we built a 10 x 5 'force' shape display, named inFORCE, that can both detect and exert variable force on individual pins. By integrating closed-loop force control, our system can provide real-time variable haptic feedback in response to the way users press the pins. Our haptic interaction design space includes volumetric haptic feedback, material emulation, layer snapping, and friction. Our proposed interaction methods, for example, enable people to "press through" computationally rendered dynamic shapes to understand the internal structure of 3D volumetric information. We also demonstrate a material property capturing functionality. Our technical evaluation and user study assess the hardware capability and haptic perception through interaction with inFORCE. We also discuss application spaces in which 'force' shape displays can be used.
In this paper, we present the Nebula, a garment that translates intentional gestures and implicit interaction into sound. Nebula is a studded cloak made from a heavy fabric that envelops the wearer with many pendulous folds. We describe the design process and specifically highlight three material investigations that reveal material connections fundamental to the experience of the garment: how the draping and construction of the garment allowed for implicit interaction, how the studs served both as a computational sensing material and a strong visual component, and how the sound design exploited tangible material qualities of the garment. Finally, we discuss how such material investigations can be put to use more generally, both as a way to produce evocative connections among the materials available in design work and as a way to extract legible design intentions for other designers and researchers.
Wind simulations are typically one-off implementations for specific applications. We introduce WindyWall, a platform for creative design and exploration of wind simulations. WindyWall is a three-panel, 90-fan array that surrounds users with 270° of wind coverage. We describe the design and implementation of the array panels, discussing how the panels can be re-arranged and how various wind simulations can be realized as simple effects. To understand how people perceive "wind" generated from WindyWall, we conducted a pilot study of wind magnitude perception using different wind activation patterns. Our findings suggest that horizontal wind activations are perceived more readily than vertical ones, and that people's perceptions of wind are highly variable: most individuals will rate airflow differently in subsequent exposures. Based on our findings, we discuss the importance of developing a method for characterizing wind simulations, and provide design directions for others using fan arrays to simulate wind.
Among the variety of (multi-modal) interaction techniques being developed and explored, the motion matching paradigm provides a novel approach to selection and control. In motion matching, users interact by rhythmically moving their bodies to track the continuous movements of different interface targets. This paper builds upon the current algorithmic and usability-focused body of work by exploring the product possibilities and implications of motion matching. Through the development and qualitative study of four novel, real-world motion matching applications with 20 participants, we elaborate on the suitability of motion matching in different multi-user scenarios, its less pertinent use in home environments, and the necessity of multi-modal interaction. Based on these learnings, we developed three novel motion-matching-based interactive lamps, which point to clear paths for further dissemination of this embodied interaction technique. This paper thereby informs the design of future motion matching interfaces and products.
An exciting, expanding palette of hybrid materials is emerging that can be programmed to actuate in response to a range of external and internal stimuli. However, there exists a dichotomy between the physicality of the actuators and the intangible computational signal that is used to program them. For material practitioners, this lack of physical cues limits their ability to engage in a "conversation with materials" (CwM). This paper presents a creative workstation that supports this epistemological style by bringing a stronger physicality to the computational signal and balancing the conversation between physical and digital actors. The station utilizes a streaming architecture to distribute control across multiple devices and leverage the rich spatial cognition that a physical space affords. Through a formal user study, we characterize the actuation design practice supported by the CwM workstation and discuss opportunities for tangible interfaces to hybrid materials.
In this paper, we propose a different perspective on the use of support material: rather than printing support structures for overhangs, our idea is to make use of its transient nature, i.e. the fact that it can be dissolved when placed in a solvent, such as water. This enables a range of new use cases, such as quickly dissolving and replacing parts of a prototype during design iteration, printing temporary assembly labels directly on the object that leave no marks when dissolved, and creating time-dependent mechanisms, such as fading in parts of an image in a shadow art piece or releasing relaxing scents from a 3D printed structure sequentially overnight. Since we use regular support material (PVA), our approach works on consumer 3D printers without any modifications. To facilitate the design of objects that leverage dissolvable support, we built a custom 3D editor plugin that includes a simulation showing how support material dissolves over time. In our evaluation, our simulation predicted geometries that are statistically similar to the example shapes within 10% error across all samples.
We propose a method that helps an unskilled user carve a physical replica of a 3D CAD model using only manual cutting tools. The method starts by analyzing the input CAD model and generating a set of carving instructions. Then, using a projector, we project the instructions sequentially, one at a time, onto a block of material to guide the user in performing each of them. After each cutting step, we use the projector-camera setup to 3D-scan the object and automatically align the scanned point cloud to the CAD model, preparing the position for the next instruction. We demonstrate a complete system to support this operation and show several examples manually carved using the system.
Craft has emerged as an important reference point for HCI. To avoid a misrepresenting, all-encompassing application of craft to interaction design, this position paper first discerns craft from HCI. It develops material engagement and mediation as differentiating factors to reposition craft in relation to tangible interaction design. The aim is to clarify craft's relation to interaction design and to open up new opportunities and questions that follow from this repositioning.
The miniaturization of electronic technologies, as well as advances in organic and material science, have contributed to the development of composite, smart, and computational materials that create promising narratives for the future of ubiquitous computing. The goal of this one-day studio is to develop tools to acquire a deeper conceptual and critical understanding of materiality in HCI. The studio will draw on strategies from a broad range of sources including critical making, speculative design, experiential prototyping, and indigenous ontologies, in order to map out key questions and concerns. The studio will give the participants the opportunity to discuss the concept of Critical Materiality as a framework for developing tangible, embedded, and embodied interfaces, by brainstorming narratives around the past lives, current uses, and the future imaginaries of materials. Participants will co-develop a shared vocabulary and theoretical framework for Critical Materiality as a strategy to be deployed in conceiving and implementing HCI artifacts and experiences. The studio will culminate in the design and production of a deck of cards that propose keywords, questions, concerns, and opportunities for Hybrid Materials within this Critical Materiality framework. The deck of cards will be made available during the conference and distributed online.
Games are a compelling subject for research on tangible and embodied interaction: not only do they have a long tradition in both the analogue and the digital game domains; they also exemplify tangibility and embodiment through the nature of game play. In this Studio, we will go beyond the analogue characteristics of board games, not just by adding a visual digital layer, but by examining new game play mechanics for board games enabled by digital technology. We will do this by introducing a new role in board games: an immersed 'character' in the game. For this, we use a very specific technology (virtual reality live streaming from a miniature perspective) as a new lens on (existing) board games. Through experimentation with and exploration of existing board games through this new lens, we will conceptualize and prototype game play mechanics for this new notion of game participation.
This studio will constructively explore hybrid publication formats for the dissemination of interaction design research in an academic context. We start from the premise that there is a gap between the richness of the artefacts created in interaction design research and the formal publication formats that exist today. Interactive artefacts, and the experiences they elicit in use, possess a richness that is difficult to communicate in text and images alone. Through exploratory prototyping, we will explore this gap: what are the key elements of artefact-based research that might benefit from alternative media, and how might we design such media to become useful supplements to existing dissemination formats? By bringing together a diverse group of participants from the TEI community to explore this topic, we aim to bring forth a discussion on how we might stretch the boundaries of academic publication of interaction design research.
Olfactory experience has become a popular topic in the TEI community, owing to its novelty in inviting new types of storytelling and a new dimension of media in embodied interaction. This studio focuses on an experiential approach to designing interactive olfactory experiences in real-world contexts. We will present two case studies on how to adapt contextual inquiry to designing olfactory interactions. We will also demo prototypes and introduce a set of basic digital tools for creating olfactory interactions. Based on the tools and demos, participants will work in groups to design olfactory experiences for their proposed contexts and applications. The aim of the studio is to establish a community around a new perspective on designing and evaluating interactive olfactory experiences in real contexts and applications.
In this workshop, participants will try their hand at a variety of tangible, embodied, and embedded sensing and feedback technologies, including vibrotactile instruments, expressive mechatronics, gesturally modulated fields of light, sound, and mist, and realtime steerable immersive atmospheres. Working through hands-on experience organized by theme, participants will be introduced to compositional and experimental methodologies. In the second half of the workshop, participants will compose some simple "ecosystems" together using the Synthesis Center's hardware-software media choreography architecture (sc), in the iStage experimental theater-scale blackbox space.
Recommender algorithms mediate much of our contemporary culture consumption. Algorithmic Experience (AX) has emerged in HCI as a way to guide users' experience with algorithms. To the best of our knowledge, previous work on recommender systems does not consider tangible interfaces as a means to support positive AX and better algorithmic awareness. This ongoing research proposes to expand the design space of the current AX debate by designing an embodied interface suited to movie recommender algorithms.
Embodied Interaction (EI) offers unique opportunities to uncover novel ways of achieving experiential learning while keeping students stimulated and engaged. Spatial ability has repeatedly been demonstrated to predict success in education and professions in Science, Technology, Engineering, and Mathematics. However, many researchers argue that training and assessment of this pertinent reasoning skill are vastly underrepresented in school curricula. This paper presents TetRotation, a PhD project centred on how affordances from Multimodal Analytics can be coupled with EI to nurture Mental Rotation (MR) skills. The overarching objectives of the project are twofold. First, the TetRotation Interaction Design study will highlight best practices identified through the assessment of efficiency, level of engagement, and learning gains achieved when using gesture-based EI to solve MR tasks. Next, in the TetRotation Game study, these design practices will guide the implementation of an interactive serious game intended to support the development of MR skills. This research relies on mixed-method techniques, including data collection from users' actions through motion sensing, EEG, gaze tracking, video recordings, click streams, interviews, and surveys.
Collaborative learning has been shown to benefit children's learning performance, increasing curiosity and strengthening the ability to cooperate. Mixed reality combined with collaborative learning is a trending research topic in Human-Computer Interaction (HCI). Additionally, with growing attention to global warming, which brings more extreme weather and climate conditions, earth science education will be a crucial topic for the next generation. Yet there are few augmented reality and mixed reality applications on the subject of earth science. In this paper, we propose a Mixed Reality Tornado Simulator that offers earth science education in a collaborative setting. Using our mixed reality application with the Microsoft HoloLens, students and an instructor can cooperate in learning about tornado formation and the damage tornadoes cause to human-built structures, farming, and vegetation. To evaluate learning performance in this mixed reality setting, we propose to study students' cognitive load while they learn abstract knowledge in earth science. We will divide students into control and experimental groups and use different teaching instruments to test for differences in cognitive load.
Due to the ubiquity of IoT devices, privacy violations can now occur across our cyber-physical-social lives. Individuals are often not aware of the possible privacy implications of their actions and commonly lack the ability to dynamically control undesired access to themselves or their information. Present approaches to privacy management lack immediacy of feedback and action, tend to be complex and non-engaging, are intrusive and socially inappropriate, and are inconsistent with users' natural interactions with the physical and social environment. This results in ineffective end-user privacy management. To address these challenges, I focus on designing tangible systems, which promise high levels of stimulation, rich feedback, and direct, engaging interaction experiences. This is achieved through intuitive awareness mechanisms and control interactions, conceptualizing interaction metaphors, implementing tangible interfaces for privacy management, and demonstrating their utility within various real-life scenarios.
The Internet of Things (IoT) [3, 16, 35] is a physical-digital ecosystem of compliant technologies and heterogeneous parts, enabling vast transmissions of data and a candid, pervasive presence of things [40]. Fashion, on the other hand, is an embodied practice, an information medium of material, social, cultural, economic, and political forces. Many wearables are outfitted to actuate data from input sources as a visualised display. However, the impact and rich possibilities of fashion adornment practices for embodied data engagement in IoT wearables design have been overlooked. Introducing the computational materials of the IoT to physical properties pushes this virtual system into the physical realm. In this research, an aesthetic criterion drawn from haute couture practices addresses the material turn [34, 39]. Design cases of fashion-led adornment styles offer a promising path to follow in the context of designing wearables for an Internet of Worn Things.
Through a multi-phased, mixed-method study with childhood cancer patients (8-12 years old) and their teams of caregivers in US and Canadian hospitals, we will explore (1) the ways the cancer experience impacts patients' social/emotional well-being, (2) how existing technologies fail to provide feelings of connectedness to friends/peers, and (3) how novel tangible technology could improve connectedness. We aim to (1) empower children with cancer by allowing them to voice their own experiences with isolation, loneliness, and the loss of a normal childhood, as well as how technology may better support their needs, (2) contribute design knowledge about how to support meaningful social interaction and play that is age- and 'ability'-appropriate, and (3) provide insight for future design and evaluation studies by better understanding constraints and opportunities for social tangible technologies intended for use in real-world pediatric hospitals.
My PhD research focuses on intergenerational story sharing for older adults and is conducted in a research-through-design manner. It comprises five iterations. The first was an exploratory prototype, Interactive Gallery, whose findings helped narrow my research area and define my research question. To answer that question, the second iteration followed as a co-design process for developing prototypes. The third and fourth iterations focused on older adults' life stories and memento stories, respectively. The fifth iteration, now in progress, aims to facilitate intergenerational story sharing and preservation in a sustainable manner.
This paper describes the Ph.D. research project "Tangible Signals", which is currently in its initial phase. The project investigates the dynamic physical representation and haptic feedback control of computer music and sound data using motorized and augmented objects. The research focuses on the artistic and performative contributions that this approach offers. Special attention is given to collaboration with visually impaired people, as they are severely limited by exclusively GUI-based interaction. The following pages describe the background of the project and present its methods, research questions, and work progress.
The emergence of social networks and apps has reduced the importance of physical space as a locus for social interaction. In response, we introduce transFORM, a cyber-physical environment installed in under-used, outdoor, public spaces. transFORM embodies our understanding of how a responsive, cyber-physical architecture can augment social relationships and increase place attachment. In this paper we critically examine the problem of social interaction in the context of our increasingly digital society, present our ambition, and introduce our prototype, which we will iteratively design and test. Cyber-physical interventions at large scale in public spaces are an inevitable future, and this paper serves to establish the fundamental terms of this frontier.
Research on tangible user interfaces commonly focuses on tangible interfaces acting alone or in comparison with multi-touch or graphical interfaces. In contrast, hybrid approaches can be seen as the norm for established "mainstream" interaction paradigms. In my work, I propose interfaces that support complementary interaction modalities, representational forms, and scales, working toward hybrid systems that are more legible and actionable than any single strategy considered separately. I describe systems involving dial-like tangibles, both passive and active, and systems combining interaction modalities such as tangible and multi-touch, and tangible and VR interaction. I briefly describe some of the planned and performed evaluations, and draw lessons from an already completed study involving a computationally mediated scientific poster platform with content developed by undergraduate students.
My doctoral research examines the use of biometrics as a design intervention in games to increase social closeness. I have built an overlay for Twitch that reveals streamer biometrics to spectators (All the Feels [16]). Using this tool, along with additional design interventions, I plan to explore and expand communication possibilities between players, streamers and spectators in order to facilitate social connection. In this abstract I briefly describe the three projects I am currently working on through my doctoral work: In the Same Boat, Twitch Plays 'All the Feels', and Turnin' the Beat Around.
This paper presents the rationale and current progress of my Ph.D. dissertation: "design interactions between robot surfaces and human designers." This topic serves as a case study exploring the question of how to design an interactive and partially intelligent space. We proposed the concept of a "space agent", defined as "interactive and intelligent environments perceived by users as human agents", based on communication theories. Building on this concept, we proposed a design framework for interactive environments. We then explored the literature on what a space agent could contribute to human users, specifically for the case of interior designers' work spaces. Research questions and research designs are introduced in this paper, followed by a discussion of the experiment design.