Soft robotics deals with interaction with environments that are uncertain and vulnerable to change, adapting easily to the environment with soft materials. However, softness inherently brings large degrees of freedom, which greatly complicates motion generation. There has been no underlying principle for understanding the motion generated by soft robots. A big gap between rigid robots and soft robots has been that the kinematics of rigid robots can be defined using analytical methods, whereas the kinematics of soft robots have been hard to define. Here, I suggest using the minimum energy path to explain the kinematics of soft robots. The motion of a soft robot follows the path that requires the minimum energy to create deformation. Hence, by plotting an energy map of a soft robot, we can estimate its motion and its reaction to external disturbances. Although it is extremely difficult to plot the energy map of a soft robot, this framework of using an energy map to understand motion can be a basis for unifying the way we explain the motion generated by soft robots as well as rigid robots. The concept of physically embodied intelligence is a way to simplify the motion generated by soft robots by embodying intelligence into the design. Better performance can be achieved with simpler actuation by using this concept. In nature, there are a few examples that exhibit this property. The Venus flytrap, for example, can close its leaves quickly by using the bistability of the leaves instead of relying on actuation alone. The inchworm achieves adaptive gripping with its prolegs by exploiting the buckling effect. In this talk, I will give an overview of various soft robotic technologies, and of some of the soft robots with physically embodied intelligence being developed at SNU Biorobotics Lab and Soft Robotics Research Center.
These examples will show that the concept of physically embodied intelligence simplifies the design and enables better performance by exploiting the characteristics of the material, and that the minimum energy path concept can be a powerful tool to explain the motion generated by these robots.
We evaluate the emotional expression capacity of skin texture change, a new design modality for social robots. In contrast to the majority of robots that use gestures and facial movements to express internal states, we developed an emotionally expressive robot that communicates using dynamically changing skin textures. The robot's shell is covered in actuated goosebumps and spikes, with programmable frequency and amplitude patterns. In a controlled study (n = 139), we presented eight texture patterns to participants in three interaction modes: online video viewing, in-person observation, and touching the texture. For most of the explored texture patterns, participants consistently perceived them as expressing specific emotions, with similar distributions across all three modes. This indicates that a texture-changing skin can be a useful new tool for robot designers. Based on the specific texture-to-emotion mappings, we provide actionable design implications, recommending using the shape of a texture to communicate emotional valence, and the frequency of texture movement to convey emotional arousal. Given that participants were most sensitive to valence when touching the texture, and were also most confident in their ratings using that mode, we conclude that touch is a promising design channel for human-robot communication.
We investigate robots using infrasound, low-frequency vibrational energy at or near the human hearing threshold, as an interaction tool for working with people. Research in psychology suggests that the presence of infrasound can impact a person's emotional state and mood, even when the person is not acutely aware of the infrasound. Although often not noticed, infrasound is commonly present in many situations including factories, airports, or near motor vehicles. Further, a robot itself can produce infrasound. Thus, we examine if infrasound may impact how people interpret a robot's social communication: if the presence of infrasound makes a robot seem more or less happy, energetic, etc., as a result of impacting a person's mood. We present the results from a series of experiments that investigate how people rate a social robot's emotionally-charged gestures, and how varied levels and sources of infrasound impact these ratings. Our results show that infrasound does have a psychological effect on the person's perception of the robot's behaviors, supporting this as a technique that a robot can use as part of its interaction design toolkit. We further provide a comparison of infrasound generation methods.
We investigate whether an on-screen agent that reacts to a teleoperator's driving performance (e.g., by showing fear during poor driving) can influence teleoperation. We explore if and how this agent, serving as a kind of virtual passenger, may impact teleoperation through its reactions. Our design concept is to create an emotional response in the operator (e.g., to feel bad for the agent), with the ultimate goal of shaping driving behavior (e.g., to slow down to calm the agent). We designed and implemented two proof-of-concept agent personas that react differently to operator driving. By conducting an initial proof-of-concept study comparing our agents to a base case, we were able to observe the impact of our agent personas on operator experience, perception of the robot, and driving behavior. While our results failed to find compelling evidence of changed teleoperator behavior, we did demonstrate that emotional on-screen agents can alter teleoperator emotion. Our initial results support the plausibility of passenger agents for impacting teleoperation, and highlight potential for more targeted, ongoing work in applying social techniques to teleoperation interfaces.
In this paper, we draw attention to the social functions of emotional display in interaction. A review of HRI papers on emotion suggests that this perspective is rarely taken in the field, but that it is useful to account for the context- and culture-dependency of emotional expression. We show in two case studies that emotional display is expected to occur at very specific places in interaction and rather independently from general emotional states, and that different cultures have different conventions regarding emotional expression. Based on conversation analytic work and the results from our case studies, we present design recommendations which allow the implementation of specific emotional signals for different human-robot interaction situations.
In this paper, we present results from a human-subject study designed to explore two facets of human mental models of robots---inferred capability and intention---and their relationship to overall trust and eventual decisions. In particular, we examine delegation situations characterized by uncertainty, and explore how inferred capability and intention are applied across different tasks. We develop an online survey where human participants decide whether to delegate control to a simulated UAV agent. Our study shows that human estimations of robot capability and intent correlate strongly with overall self-reported trust. However, overall trust is not independently sufficient to determine whether a human will decide to trust (delegate) a given task to a robot. Instead, our study reveals that estimations of robot intention, capability, and overall trust are integrated when deciding to delegate. From a broader perspective, these results suggest that calibrating overall trust alone is insufficient; to make correct decisions, humans need (and use) multi-faceted mental models when collaborating with robots across multiple contexts.
Robots that interact with children are becoming more common in places such as child care and hospital environments. While such robots may mistakenly provide nonsensical information, or have mechanical malfunctions, we know little of how these robot errors are perceived by children, and how they impact trust. This is particularly important when robots provide children with information or instructions, such as in education or health care. Drawing inspiration from established psychology literature investigating how children trust entities who teach or provide them with information (informants), we designed and conducted an experiment to examine how robot errors affect how young children (3-5 years old) trust robots. Our results suggest that children utilize their understanding of people to develop their perceptions of robots, and use this to determine how to interact with robots. Specifically, we found that children developed their trust model of a robot based on the robot's previous errors, similar to how they would for a person. We however failed to replicate other prior findings with robots. Our results provide insight into how children as young as 3 years old might perceive robot errors and develop trust.
When a robot breaks a person's trust by making a mistake or failing, continued interaction will depend heavily on how the robot repairs the trust that was broken. Prior work in psychology has demonstrated that both the trust violation framing and the trust repair strategy influence how effectively trust can be restored. We investigate trust repair between a human and a robot in the context of a competitive game, where a robot tries to restore a human's trust after a broken promise, using either a competence or integrity trust violation framing and either an apology or denial trust repair strategy. Results from a 2×2 between-subjects study (n = 82) show that participants interacting with a robot employing the integrity trust violation framing and the denial trust repair strategy are significantly more likely to exhibit behavioral retaliation toward the robot. In the Dyadic Trust Scale survey, an interaction between trust violation framing and trust repair strategy was observed. Our results demonstrate the importance of considering both trust violation framing and trust repair strategy choice when designing robots to repair trust. We also discuss the influence of human-to-robot promises and ethical considerations when framing and repairing trust between a human and robot.
This paper explores people's attitudes about a service robot using customer data in conversation. In particular, how can robots understand privacy expectations in social grey areas like cafes, which are both open to the public and used for private meetings? To answer this question, we introduce the Theater Method, which allows a participant to experience a "violation" of their privacy rather than have their actual privacy be violated. Using Python to generate 288 scripts that fully explored our research variables, we ran a large-scale online study (N=4608). To validate our results and ask more in-depth questions, we also ran an in-person follow-up (N=20). The experiments explored social & data-inspired variables such as data source, the positive or negative use of that data, and whom the robot verbally addressed, all of which significantly predicted participants' social attitudes towards the robot's politeness, consideration, appropriateness, and respect of privacy. Body language analysis and cafe-related conversation were the lowest risk, but even more extreme data channels are potentially okay when used for positive purposes.
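The script-generation step described above amounts to taking the Cartesian product of the study variables and rendering each combination into a script. The sketch below illustrates the pattern; the factor names and levels are hypothetical assumptions for illustration, not the paper's actual design (which crossed its variables to produce 288 scripts).

```python
# Illustrative sketch: crossing hypothetical study variables into scripts.
# Factor names and levels are assumptions, not the paper's actual design.
from itertools import product

data_sources = ["body language", "cafe conversation", "purchase history"]
data_uses = ["positive", "negative"]
addressees = ["customer", "third party"]

def make_script(source, use, addressee):
    # In the real study each combination would be rendered into a full
    # dialogue script; here we return a one-line summary instead.
    return f"Robot uses {source} for a {use} purpose, addressing the {addressee}."

scripts = [make_script(*combo)
           for combo in product(data_sources, data_uses, addressees)]
print(len(scripts))  # 3 * 2 * 2 = 12 combinations in this sketch
```

The same pattern scales to any number of factors: adding a level to any list multiplies the number of generated scripts accordingly.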
Worldwide, manufacturers are reimagining the future of their workforce and its connection to technology. Rather than replacing humans, Industry 5.0 explores how humans and robots can best complement one another's unique strengths. However, realizing this vision requires an in-depth understanding of how workers view the positive and negative attributes of their jobs, and the place of robots within it. In this paper, we explore the relationship between work attributes and automation goals by engaging in field research at a manufacturing plant. We conducted 50 face-to-face interviews with assembly-line workers (n = 50), which we analyzed using discourse analysis and social constructivist methods. We found that the work attributes deemed most positive by participants include social interaction, movement and exercise, (human) autonomy, problem solving, task variety, and building with their hands. The main negative work attributes included health and safety issues, feeling rushed, and repetitive work. We identified several ways robots could help reduce negative work attributes and enhance positive ones, such as reducing work interruptions and cultivating physical and psychological well-being. Based on our findings, we created a set of integration considerations for organizations planning to deploy robotics technology, and discuss how the manufacturing and HRI communities can explore these ideas in the future.
Robotic teleoperation can be a complex task due to factors such as high degree-of-freedom manipulators, operator inexperience, and limited operator situational awareness. To reduce teleoperation complexity, researchers have developed the shared autonomy control paradigm that involves joint control of a robot by a human user and an autonomous control system. We introduce the concept of active learning into shared autonomy by developing a method for systems to leverage information gathering: minimizing the system's uncertainty about user goals by moving to information-rich states to observe user input. We create a framework for balancing information gathering actions, which help the system gain information about user goals, with goal-oriented actions, which move the robot towards the goal the system has inferred from the user. We conduct an evaluation within the context of users who are multitasking that compares pure teleoperation with two forms of shared autonomy: our balanced system and a traditional goal-oriented system. Our results show significant improvements for both shared autonomy systems over pure teleoperation in terms of belief convergence about the user's goal and task completion speed and reveal tradeoffs across shared autonomy strategies that may inform future investigations in this space.
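The balancing idea above can be sketched as scoring each candidate robot action by a weighted sum of expected information gain about the user's goal (reduction in belief entropy) and progress toward the currently inferred goal. This is a minimal illustration under assumed numbers, not the paper's actual formulation; the weighting parameter, the candidate actions, and their scores are all hypothetical.

```python
# Minimal sketch: trade off information gathering against goal progress.
# The lambda weighting and all action scores are illustrative assumptions.
import math

def entropy(belief):
    """Shannon entropy (bits) of a discrete belief over user goals."""
    return -sum(p * math.log2(p) for p in belief if p > 0)

def score(action, belief, lam=0.5):
    # expected_belief: the belief the system expects to hold after
    # observing the user's input in response to this action (assumed given)
    info_gain = entropy(belief) - entropy(action["expected_belief"])
    return lam * info_gain + (1 - lam) * action["goal_progress"]

belief = [0.5, 0.5]  # maximally uncertain between two candidate goals
actions = [
    {"name": "move between goals",   # information-rich state, little progress
     "expected_belief": [0.95, 0.05], "goal_progress": 0.0},
    {"name": "move toward goal A",   # commits early, learns little
     "expected_belief": [0.6, 0.4], "goal_progress": 0.6},
]
best = max(actions, key=lambda a: score(a, belief))
```

With a highly uncertain belief, the information-gathering action scores highest; once the belief concentrates on one goal, the same scoring favors goal-oriented motion.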
In a controlled experiment, participants (n = 60) competed in a monotonous task with an autonomous robot for real monetary incentives. For each participant, we manipulated the robot's performance and the monetary incentive level across ten rounds. In each round, a participant's performance compared to the robot's would affect their odds in a lottery for the monetary prize. Standard economic theory predicts that people's effort will increase with prize value. Furthermore, recent work in behavioral economics predicts that there will also be a discouragement effect, with stronger robot performance discouraging human effort, and that this effect will increase with prize. We were not able to detect a meaningful effect of monetary prize, but we found a small discouragement effect, with human effort decreasing with increased robot performance, significant at the p < 0.005 level. Using per-round subjective indicators, we also found a positive effect of robot performance on its perceived competence, a negative effect on the participants' liking of the robot, and a negative effect on the participants' own competence, all at p < 0.0001. These findings shed light on how people may exert work effort and perceive robotic competitors in a human-robot workforce, and could have implications on labor supply decisions and the design of compensation schemes in the workplace.
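The abstract states that a participant's performance relative to the robot's affected their odds in the prize lottery, without specifying the rule. A simple contest-style sketch (an assumption for illustration, not the paper's actual mechanism) makes the winning odds proportional to the participant's share of total output:

```python
# Hypothetical lottery rule: odds proportional to the participant's share
# of the combined output. This is an illustrative assumption, not the
# mechanism used in the study.
def win_probability(human_units, robot_units):
    total = human_units + robot_units
    return human_units / total if total > 0 else 0.5

# Equal performance gives even odds; a stronger robot lowers the
# participant's odds, one way to rationalize a discouragement effect.
```

Under such a rule, each extra unit of robot output directly dilutes the participant's chances, so stronger robot performance reduces the marginal return on human effort.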
As robots, both individually and in groups, become more prevalent in everyday contexts (e.g., schools, workplaces, educational and caregiving institutions), it is possible that they will be perceived as outgroups, or come into competition for resources with humans. Research indicates that some of the psychological effects of intergroup interaction common in humans translate to human-robot interaction (HRI). In this paper, we examine how intergroup competition, like that among humans, translates to HRI. Specifically, we examined how Number of Humans (1, 3) and Number of Robots (1, 3) affect behavioral competition on dilemma tasks and survey ratings of perceived threat, emotion, and motivation (fear, greed, and outperformance). We also examined the effect of perceived group entitativity (i.e., cohesiveness) on competition motivation. Consistent with the social psychology literature, these results indicate that groups of humans (especially entitative groups) showed more greed-based motivation and competition toward robots than individual humans did. However, we did not find evidence that number of robots had an effect on fear-based motivation or competition against them unless the robot groups were perceived as highly entitative. Our data also show the intriguing finding that participants displayed more fear of and competed slightly more against robots that matched their number. Future research should more deeply examine this novel pattern of results compared to one-on-one HRI and typical group dynamics in social psychology.
Human-robot interactions that involve multiple robots are becoming common. It is crucial to understand how multiple robots should transfer information and transition users between them. To investigate this, we designed a 3 × 3 mixed-design study in which participants took part in a navigation task. Participants interacted with a stationary robot who summoned a functional (not explicitly social) mobile robot to guide them. Each participant experienced the three types of robot-robot interaction: representative (the stationary robot spoke to the participant on behalf of the mobile robot), direct (the stationary robot delivered the request to the mobile robot in a straightforward manner), and social (the stationary robot delivered the request to the mobile robot in a social manner). Each participant witnessed only one type of robot-robot communication: silent (the robots covertly communicated), explicit (the robots acknowledged that they were communicating), or reciting (the stationary robot said the request aloud). Our results show that it is possible to instill socialness in and improve likability of a functional robot by having a social robot interact socially with it. We also found that covertly exchanging information is less desirable than reciting information aloud.
In this paper we sought to understand how the display of different levels of warmth and competence, as well as, different roles (opponent versus partner) portrayed by a robot, affect the display of emotional responses towards robots and how they can be used to predict future intention to work. For this purpose we devised an entertainment card-game group scenario involving two humans and two robots (n=54). The results suggest that different levels of warmth and competence are associated with distinct emotional responses from users and that these variables are useful in predicting future intention to work, thus hinting at the importance of considering warmth and competence stereotypes in Human-Robot Interaction.
Many of the problems we face are solved in small groups. Using decades of research from psychology, HRI research is increasingly trying to understand how robots impact the dynamics and outcomes of these small groups. Current work almost exclusively uses humanoid robots that take on the role of an active group participant to influence interpersonal dynamics. We argue that this has limitations and propose an alternative design approach of using a peripheral robotic object. This paper presents Micbot, a peripheral robotic object designed to promote participant engagement and ultimately performance using nonverbal implicit interactions. The robot is evaluated in a three-condition (no movement, engagement behaviour, random movement) laboratory experiment with 36 three-person groups (N=108). Results showed that the robot was effective in promoting not only increased group engagement but also improved problem solving performance. In the engagement condition, participants displayed more even backchanneling toward one another than in the no-movement condition, though not more than in the random-movement condition. This more even distribution of backchanneling predicted better problem solving performance.
This paper explores the role of prosocial behaviour when people team up with robots in a collaborative game that presents a social dilemma similar to a public goods game. An experiment was conducted with the proposed game in which each participant joined a team with a prosocial robot and a selfish robot. During 5 rounds of the game, each player chooses between contributing to the team goal (cooperate) or contributing to their individual goal (defect). The prosociality level of the robots only affects their strategies to play the game, as one always cooperates and the other always defects. We conducted a user study at the office of a large corporation with 70 participants where we manipulated the game result (winning or losing) in a between-subjects design. Results revealed two important considerations: (1) the prosocial robot was rated more positively in terms of its social attributes than the selfish robot, regardless of the game result; (2) the perception of competence, the responsibility attribution (blame/credit), and the preference for a future partner revealed significant differences only in the losing condition. These results yield important concerns for the creation of robotic partners, the understanding of group dynamics and, from a more general perspective, the promotion of a prosocial society.
How do you work with a robot millions of miles away to make scientific discoveries on a planet you've never set foot on? Although much work in human-robot interaction describes the meaningful relationships that humans forge with their robots in one-on-one encounters, when robots venture into places where humans cannot go --- in search and rescue operations, ocean voyages, or even into space --- they do so as part of a large human team. Decisions about what the robot should do and where it should go are therefore the result of large group interactions instead of individual human cognition: the realm of organizational sociology.
This talk draws on over a decade of ethnography with NASA's robotic spacecraft missions, specifically focusing on the Mars Exploration Rover mission. Going behind the scenes of the mission, I show the meaning-making, emotional connections, and embodied synergy that scientists develop as they work with their robots millions of miles away on a daily basis. This begins by learning to see through the robots' "eyes" on another planet, yet the peculiar social arrangement of mission work produces a deeper connection to the robot explorers too: one that is no doubt responsible for the extraordinary success and unexpected longevity of the mission team.
Studying robotic spacecraft teams demonstrates how humans build a particular and peculiar empathy with their robotic colleagues that goes beyond anthropomorphism. Instead, this case points to the many and unusual ways in which organizations participate in our interactions with, understanding of, and ultimately care for the robots we work with in everyday life.
Drones are becoming ubiquitous and offer support to people in various tasks, such as photography, in increasingly interactive social contexts. We introduce drone.io, a projected body-centric graphical user interface for human-drone interaction. Using two simple gestures, users can interact with a drone in a natural manner. drone.io is the first human-drone graphical user interface embedded on a drone to provide both input and output capabilities. This paper describes the design process of drone.io. We present a proof of concept, drone-based implementation, as well as a fully functional prototype for a drone tour-guide scenario. We report drone.io's evaluation in three user studies (N=27) and show that people were able to use the interface with little prior training. We contribute to the field of human-robot interaction and the growing field of human-drone interaction.
This paper presents a gesture set for communicating states to novice users from a small Unmanned Aerial System (sUAS) through an elicitation study comparing gestures created by participants recruited from the general public with varying levels of experience with an sUAS. Previous work in sUAS flight paths sought to communicate intent, destination, or emotion without focusing on concrete states such as Low Battery or Landing. This elicitation study uses a participatory design approach from human-computer interaction to understand how novice users would expect an sUAS to communicate states, and ultimately suggests flight paths and characteristics to indicate those states. We asked users from the general public (N=20) to create gestures for seven distinct sUAS states to provide insights for human-drone interactions and to present intuitive flight paths and characteristics with the expectation that the sUAS would have general commercial application for inexperienced users. The results indicate relatively strong agreement scores for three sUAS states: Landing (0.455), Area of Interest (0.265), and Low Battery (0.245). The other four states have lower agreement scores; however, some consensus emerged for all seven states. The agreement scores and the associated gestures suggest guidance for engineers to develop a common set of flight paths and characteristics for an sUAS to communicate states to novice users.
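Agreement scores of this kind are commonly computed, in gesture-elicitation work, as the sum of squared proportions of identical proposals for a given referent. The sketch below shows that widely used formula; whether the paper used this exact variant (rather than, e.g., a chance-corrected agreement rate) is an assumption.

```python
# Common gesture-elicitation agreement score: sum of squared proportions
# of identical proposals for one referent. Whether the paper used this
# exact variant is an assumption.
def agreement_score(group_sizes):
    """group_sizes: sizes of the groups of identical gesture proposals."""
    n = sum(group_sizes)
    return sum((size / n) ** 2 for size in group_sizes)

# Example: 20 participants whose proposals split into groups of
# 10, 6, and 4 identical gestures.
print(round(agreement_score([10, 6, 4]), 3))  # 0.38
```

A score of 1.0 means every participant proposed the same gesture; scores fall toward 1/n as proposals fragment, which is why values like 0.455 already indicate relatively strong agreement.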
The consumer drone market has shown constant growth for the past few years. As drones become increasingly autonomous and used for a growing number of applications, it is crucial to establish parameters for collocated human-drone interaction. Prior research showed how ground robots should approach a person to initiate interaction. This paper builds upon prior work and investigates how a flying robot should approach a person. Because of the flight capability, drones present more approach parameters than ground robots and require further study to properly design future interactions. Since research methodologies in aerial robotics are not well established, we present a taxonomy of methodologies for human-drone interaction studies to guide future researchers in the field. This paper then contributes a user study (N=24) investigating how proximity, speed, direction, and trajectory shape a comfortable drone approach. We present our study results and design guidelines for the safe approach of a drone in a collocated indoor environment.
Driver interaction with increasingly automated vehicles requires prior knowledge of system capabilities, operational know-how to use novel car equipment and responsiveness to unpredictable situations. With the purpose of getting drivers ready for autonomous driving, in a between-subject study sixty inexperienced participants were trained with an on-board video tutorial, an Augmented Reality (AR) program and a Virtual Reality (VR) simulator. To evaluate the transfer of training to real driving scenarios, a test drive on public roads was conducted implementing, for the first time in these conditions, the Wizard of Oz (WoZ) protocol. Results suggest that VR and AR training can foster knowledge acquisition and improve reaction time performance in take-over requests. Moreover, participants' behavior during the test drive highlights the ecological validity of the experiment thanks to the effective implementation of the WoZ methodology.
In previous work, researchers have repeatedly demonstrated that robots' use of deictic gestures enables effective and natural human-robot interaction. However, new technologies such as augmented reality head mounted displays enable environments in which mixed-reality becomes possible, and in such environments, physical gestures become but one category among many different types of mixed reality deictic gestures. In this paper, we present the first experimental exploration of the effectiveness of mixed reality deictic gestures beyond physical gestures. Specifically, we investigate human perception of videos simulating the display of allocentric gestures, in which robots circle their targets in users' fields of view. Our results suggest that this is an effective communication strategy, both in terms of objective accuracy and subjective perception, especially when paired with complex natural language references.
Teleoperation remains a dominant control paradigm for human interaction with robotic systems. However, teleoperation can be quite challenging, especially for novice users. Even experienced users may face difficulties or inefficiencies when operating a robot with unfamiliar and/or complex dynamics, such as industrial manipulators or aerial robots, as teleoperation forces users to focus on low-level aspects of robot control, rather than higher level goals regarding task completion, data analysis, and problem solving. We explore how advances in augmented reality (AR) may enable the design of novel teleoperation interfaces that increase operation effectiveness, support the user in conducting concurrent work, and decrease stress. Our key insight is that AR may be used in conjunction with prior work on predictive graphical interfaces such that a teleoperator controls a virtual robot surrogate, rather than directly operating the robot itself, providing the user with foresight regarding where the physical robot will end up and how it will get there. We present the design of two AR interfaces using such a surrogate: one focused on real-time control and one inspired by waypoint delegation. We compare these designs against a baseline teleoperation system in a laboratory experiment in which novice and expert users piloted an aerial robot to inspect an environment and analyze data. Our results revealed that the augmented reality prototypes provided several objective and subjective improvements, demonstrating the promise of leveraging AR to improve human-robot interactions.
It is well established that a robot's visual appearance plays a significant role in how it is perceived. Considerable time and resources are usually dedicated to help ensure that the visual aesthetics of social robots are pleasing to users and help facilitate clear communication. However, relatively little consideration is given to how the voice of the robot should sound, which may have adverse effects on acceptance and clarity of communication. In this study, we explore the mental images people form when they hear robots speaking. In our experiment, participants listened to several voices, and for each voice they were asked to choose a robot, from a selection of eight commonly used social robot platforms, that was best suited to have that voice. The voices were manipulated in terms of naturalness, gender, and accent. Results showed that a) participants seldom matched robots with the voices that were used in previous HRI studies, b) the gender and naturalness vocal manipulations strongly affected participants' selection, and c) the linguistic content of the utterances spoken by the voices did not affect people's selection. This finding suggests that people associate voices with robot pictures even when the content of spoken utterances is unintelligible. Our findings indicate that both a robot's voice and its appearance contribute to robot perception. Thus, giving a mismatched voice to a robot might introduce a confounding effect in HRI studies. We therefore suggest that voice design should be considered more thoroughly when planning spoken human-robot interactions.
This paper examines relationships between perceptions of warmth and competence, emotional responses, and behavioral tendencies in the context of social robots. Participants answered questions about these three aspects of impression formation after viewing an image of one of 342 social robots in the Stanford Social Robots Database. Results suggest that people have similar emotional and behavioral reactions to robots as they have to humans; impressions of the robots' warmth and competence predicted specific emotional responses (admiration, envy, contempt, pity) and those emotional responses predicted distinct behavioral tendencies (active facilitation, active harm, passive facilitation, passive harm). However, the predicted relationships between impressions and harmful behavioral tendencies were absent. This novel asymmetry between perceptions of and intentions towards robots is considered in the context of the computers as social actors framework, and opportunities for further research are discussed.
Robots are machines and as such do not have gender. However, many of the gender-related perceptions and expectations formed in human-human interactions may be inadvertently and unreasonably transferred to interactions with social robots. In this paper, we investigate how gender effects in people's perception of robots and humans depend on their emotional intelligence (EI), a crucial component of successful human social interactions. Our results show that participants perceive different levels of EI in robots just as they do in humans. Also, their EI perceptions are affected by gender-related expectations both when judging humans and when judging robots with minimal gender markers, such as voice or even just a name. We discuss the implications for human-robot interactions (HRI) and propose further explorations of EI for future HRI studies.
It has long been assumed that when people observe robots they intuitively ascribe mind and intentionality to them, just as they do to humans. However, much of this evidence relies on experimenter-provided questions or self-reported judgments. We propose a new way of investigating people's mental state ascriptions to robots by carefully studying explanations of robot behavior. Since people's explanations of human behavior are deeply grounded in assumptions of mind and intentional agency, explanations of robot behavior can reveal whether such assumptions similarly apply to robots. We designed stimulus behaviors that were representative of a variety of robots in diverse contexts and ensured that people saw the behaviors as equally intentional, desirable, and surprising across both human and robot agents. We provided 121 participants with verbal descriptions of these behaviors and asked them to explain in their own words why the agent (human or robot) had performed them. To systematically analyze the verbal data, we used a theoretically grounded classification method to identify core explanation types. We found that people use the same conceptual toolbox of behavior explanations for both human and robot agents, robustly indicating inferences of intentionality and mind. But people applied specific explanatory tools at somewhat different rates and in somewhat different ways for robots, revealing specific expectations people hold when explaining robot behaviors.
For robots to effectively collaborate with humans, it is critical to establish a shared mental model amongst teammates. In the case of incongruous models, catastrophic failures may occur unless mitigating steps are taken. To identify and remedy these potential issues, we propose a novel mechanism for enabling an autonomous system to detect model disparity between itself and a human collaborator, infer the source of the disagreement within the model, evaluate potential consequences of this error, and finally, provide human-interpretable feedback to encourage model correction. This process effectively enables a robot to provide a human with a policy update based on perceived model disparity, reducing the likelihood of costly or dangerous failures during joint task execution. This paper makes two contributions at the intersection of explainable AI (xAI) and human-robot collaboration: 1) The Reward Augmentation and Repair through Explanation (RARE) framework for estimating task understanding and 2) A human subjects study illustrating the effectiveness of reward augmentation-based policy repair in a complex collaborative task.
Recent work in explanation generation for decision making agents has looked at how unexplained behavior of autonomous systems can be understood in terms of differences in the model of the system and the human's understanding of the same, and how the explanation process as a result of this mismatch can be then seen as a process of reconciliation of these models. Existing algorithms in such settings, while having been built on contrastive, selective and social properties of explanations as studied extensively in the psychology literature, have not, to the best of our knowledge, been evaluated in settings with actual humans in the loop. As such, the applicability of such explanations to human-AI and human-robot interactions remains suspect. In this paper, we set out to evaluate these explanation generation algorithms in a series of studies in a mock search and rescue scenario with an internal semi-autonomous robot and an external human commander. During that process, we hope to demonstrate to what extent the properties of these algorithms hold as they are evaluated by humans.
Successful robotic assistive feeding depends on reliable bite acquisition and easy bite transfer. The latter constitutes a unique type of robot-human handover where the human needs to use the mouth. This places a high burden on the robot to make the transfer easy. We believe that the ease of transfer not only depends on the transfer action but also is tightly coupled with the way a food item was acquired in the first place. To determine the factors influencing good bite transfer, we designed both skewering and transfer primitives and developed a robotic feeding system that uses these manipulation primitives to feed people autonomously. First, we determined the primitives' success rates for bite acquisition with robot experiments. Next, we conducted user studies to evaluate the ease of bite transfer for different combinations of skewering and transfer primitives. Our results show that an intelligent food item dependent skewering strategy improves the bite acquisition success rate and that the choice of skewering location and the fork orientation affects the ease of bite transfer significantly.
In this paper we present the results of an experimental study investigating the application of human persuasive strategies to a social robot. We demonstrate that robot displays of goodwill and similarity to the participant significantly increased robot persuasiveness, as measured objectively by participant behaviour. However, such strategies had no impact on subjective measures concerning perception of the robot, and perception of the robot did not correlate with participant behaviour. We hypothesise that this is due to difficulty in accurately measuring perception of a robot using subjective measures. We suggest our results are particularly relevant for the design and development of socially assistive robots.
Since the diagnosis of autism spectrum disorder (ASD) relies heavily on behavioral observations by experienced clinicians, we seek to investigate whether parts of this job can be autonomously performed by a humanoid robot using only sensors available on-board. To that end, we developed a robot-assisted ASD diagnostic protocol. In this work we propose a Partially Observable Markov Decision Process (POMDP) framework for this protocol, which enables the robot to infer information about the state of the child based on observations of the child's behavior. We extend our previous work by developing a protocol POMDP model which uses tasks of the protocol as actions. We devise a method to interface the protocol and task models by using the belief at the end of a task to generate observations for the protocol POMDP, resulting in a hierarchical POMDP framework. We evaluate our approach through an exploratory study with fifteen children (seven typically developing and eight with ASD).
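The abstract leaves the POMDP machinery implicit. As a hedged, purely illustrative sketch (the states, observations, and probabilities below are invented, not taken from the study), the discrete belief update at the core of any such framework looks like:

```python
# Discrete POMDP belief update: b'(s') ∝ O(o | s', a) * sum_s T(s' | s, a) * b(s).
# All states, observations, and probabilities below are invented for illustration.

def belief_update(belief, action, observation, T, O):
    """belief: {state: prob}; T[a][s][s'] transition model; O[a][s'][o] observation model."""
    new_belief = {}
    for s_next in belief:
        predicted = sum(T[action][s][s_next] * belief[s] for s in belief)
        new_belief[s_next] = O[action][s_next][observation] * predicted
    total = sum(new_belief.values())
    return {s: p / total for s, p in new_belief.items()}

# Toy child-state model: is the child "engaged" or "distracted" during a task?
T = {"ask": {"engaged":    {"engaged": 0.9, "distracted": 0.1},
             "distracted": {"engaged": 0.3, "distracted": 0.7}}}
O = {"ask": {"engaged":    {"eye_contact": 0.8, "no_eye_contact": 0.2},
             "distracted": {"eye_contact": 0.2, "no_eye_contact": 0.8}}}

b = {"engaged": 0.5, "distracted": 0.5}
b = belief_update(b, "ask", "eye_contact", T, O)  # belief shifts toward "engaged"
```

In the hierarchical framework described, the belief reached at the end of a task model would then be converted into an observation for the protocol-level POMDP.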
Intimate relationships are integral parts of human societies, yet many relationships are in distress. Couples counseling has been shown to be effective in preventing and alleviating relationship distress, yet many couples do not seek professional help due to cost, logistics, and discomfort in disclosing private problems. In this paper, we describe our efforts towards the development of a fully automated couples counselor robot, and focus specifically on the problem of identifying and processing "collaborative responses", in which a human couple co-construct a response to a query from the robot. We present an analysis of collaborative responses obtained from a pilot study, then develop a data-driven model to detect the end of collaborative responses for regulating turn taking during a counseling session. Our model uses a combination of multimodal features, and achieves an offline weighted F-score of 0.81. Finally, we present findings from a quasi-experimental study with a robot facilitating a counseling session to promote intimacy with romantic couples. Our findings suggest that the session improves couples' intimacy and positive affect. An online evaluation of the end-of-collaborative-response model demonstrates an F-score of 0.72.
Over the summer of 2018, CHARISMA Robotics Laboratory at Oregon State University invited a Theater Artist to collaborate on two interdisciplinary robot theater productions using ChairBots and human performers. Both productions shared in a three-week development period, the same development team and performing robots, and culminated in live performances. This paper acts as a companion to the video documentation of these productions, addressing the novelty and contributions, both technical and creative, of dancing with robot furniture.
Our goal is to disseminate an exploratory investigation that examined how physical presence and collaboration can be important factors in the development of assistive robots that can go beyond information-giving technologies. In particular, this video exhibits the setting and procedures of a user study that explored different types of collaborative interactions between robots and blind people.
In an eight-week STEAM education program for elementary school children, kids worked on musical theater projects with a variety of robots. The program included 4 modules about acting, dancing, music & sounds, and drawing. Twenty-five children grades K-5 participated in this program. Children were excited by the program and they demonstrated collaboration and peer-to-peer interactive learning. In the future, we plan to add more robust interaction and more science and engineering experiences to the program. This program is expected to promote STEM education in the informal learning environment by combining it with arts and design.
Seeing each other face to face across a distance has become natural with the advent of video calls. However, the desire to touch each other at a distance is far from being fulfilled. We developed a pair of devices that can transmit touch sensations such as tapping, caressing, and pressing through the shoulder. The touch sensors on a telepresence robot transmit touch feedback to a remote user wearing a haptic vest via wireless communication.
Humans use several social cues, both verbal and nonverbal, to draw the attention of others. In this study we investigate whether similar behaviors can also be effectively used by a social robot for drawing attention. To this end, we set up a welcoming humanoid (Pepper) at the entrance of a university building. Its behaviors include one or a combination of behavioral modalities (i.e., a waving gesture, utterance, and movement). These behaviors are triggered automatically by people-detection software which tracks passersby and monitors their head keypoints. Our findings imply that Pepper draws more attention when displaying a combination of modalities.
We consider a quadrotor equipped with a forward-facing camera and a user freely moving in its proximity; we control the quadrotor to stay in front of the user, using only camera frames. To do so, we train a deep neural network to predict the drone controls given the camera image. Training data is acquired by running a simple hand-designed controller which relies on optical motion-tracking data.
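As a hedged illustration of this training setup (not the authors' network, which operates on raw camera frames), the following miniature behavior-cloning sketch fits a one-parameter model to a hypothetical hand-designed controller:

```python
import random

# Miniature behavior cloning: a hand-designed "expert" controller maps a
# single hypothetical feature (the user's horizontal offset in the frame)
# to a yaw command, and we fit a model to imitate it from logged pairs.
# The real system instead maps raw camera frames through a deep network.

def expert_controller(offset):
    return 0.8 * offset  # proportional controller driven by motion tracking

random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(200)]
data = [(x, expert_controller(x)) for x in xs]

# Fit y = w * x by gradient descent on mean squared error
# (a stand-in for training the network on image/control pairs).
w = 0.0
for _ in range(500):
    grad = sum(2.0 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.1 * grad

print(round(w, 3))  # recovers the expert gain, 0.8
```

Once trained, the learned model replaces the expert at deployment time, which is exactly why the motion-tracking rig is only needed during data collection.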
We propose a system to control robots in the user's proximity with pointing gestures---a natural device that people use all the time to communicate with each other. Our system has two requirements: first, the robot must be able to reconstruct its own motion, e.g. by means of visual odometry; second, the user must wear a wristband or smartwatch with an inertial measurement unit. Crucially, the robot does not need to perceive the user in any way. The resulting system is widely applicable, robust, and intuitive to use.
The video describes the novel concept of "culturally competent robotics", which is the main focus of the project CARESSES (Culturally-Aware Robots and Environmental Sensor Systems for Elderly Support). CARESSES is a multidisciplinary project whose goal is to design the first socially assistive robots that can adapt to the culture of the older people they are taking care of. Socially assistive robots are required to help the users in many ways, including reminding them to take their medication, encouraging them to keep active, and helping them keep in touch with family and friends. The video describes a new generation of robots that will perform their actions with attention to the older person's customs, cultural practices and individual preferences.
To avoid putting humans at risk, there is an imminent need to pursue autonomous robotized facilities with maintenance capabilities in the energy industry. This paper presents a video of the ORCA Hub simulator, a framework that unifies three types of autonomous systems (Husky, ANYmal and UAVs) on an offshore platform digital twin for training and testing human-robot collaboration scenarios, such as inspection and emergency response.
This video illustrates the large-scale experiment of the L2TOR project that will be presented at the HRI 2019 conference. The experiment aimed to investigate how 192 Dutch 5-year-old children could learn 34 English words from a NAO robot in 7 lessons. The experiment compared 4 conditions: 1) robot using iconic gestures, 2) robot without iconic gestures, 3) tablet only, and 4) a control group. The results revealed that children could learn more English words in all experimental conditions compared to the control group. The three experimental conditions did not show any significant differences regarding the learning outcomes.
Dark patterns are a recent phenomenon in the field of interaction design, where design patterns and behavioral psychology are deployed in ways that deceive the user. However, the current corpus of dark patterns literature focuses largely on screen-based digital interactions and should be expanded to include home robots. In this paper, we apply the concept of dark patterns to the 'cute' aesthetic of home robots and suggest that their design constitutes a dark pattern in HRI by (1) emphasizing short-term gains over long-term decisions; (2) depriving users of some degree of conscious agency at the site of interaction; and (3) creating an affective response in the user for the purpose of collecting emotional data. This exploratory paper expands the current library of dark patterns and their application to new technological interfaces into the domain of home robotics in order to establish the grounds for an ethical design practice in HRI.
Social robots are being designed to use human-like communication techniques, including body language, social signals, and empathy, to work effectively with people. Just as between people, some robots learn about people and adapt to them. In this paper we present one such robot design: we developed Sam, a robot that learns minimal information about a person's background, and adapts to this background. Our in-the-wild study found that people helped Sam for significantly longer when it adapted to match their background. While initially we saw this as a success, in re-considering our study we started seeing a different angle. Our robot effectively deceived people (changed its story and text), based on some knowledge of their background, to get more work from them. There was little direct benefit to the person from this adaptation, yet the robot stood to gain free labor. We would like to pose the question to the community: is this simply good robot design, or is our robot being manipulative? Where does the ethical line lie between a robot leveraging social techniques to improve interaction, and the more negative framing of a robot or algorithm taking advantage of people? How can we decide what is good here, and what is less desirable?
In this paper, we introduce publicly available human-robot teaming datasets captured during the summer 2018 season using our Aquaticus testbed. Our Aquaticus testbed is designed to examine the interactions between human-human and human-robot teammates situated in the marine environment in their own vehicles. In particular, we assess these interactions while humans and fully autonomous robots play a competitive game of capture the flag on the water. Our testbed is unique in that the humans are situated in the field with their fully autonomous robot teammates in vehicles that have similar dynamics. Holding the competition on the water reduces the safety concerns and cost of performing similar experiments in the air or on the ground, while creating a complex, dynamic, and partially observable view of the world for participants in their motorized kayaks. The main modality for teammate interaction is audio, to better simulate the experience of real-world tactical situations, i.e., fighter pilots talking to each other over radios. We have released our complete datasets publicly to enable researchers throughout the HRI community who do not have access to such a testbed, and who may have expertise different from our own, to perform their own analyses and contribute to the community.
Previous research in moral psychology and human-robot interaction has shown that technology shapes human morality, and research in human-robot interaction has shown that humans naturally perceive robots as moral agents. Accordingly, we propose that language-capable autonomous robots are uniquely positioned among technologies to significantly impact human morality. We therefore argue that it is imperative that language-capable robots behave according to human moral norms and communicate in such a way that their intention to adhere to those norms is clear. Unfortunately, the design of current natural language oriented robot architectures enables certain architectural components to circumvent or preempt those architectures' moral reasoning capabilities. In this paper, we show how this may occur, using clarification request generation in current dialog systems as a motivating example. Furthermore, we present experimental evidence that the types of behavior exhibited by current approaches to clarification request generation can cause robots to (1) miscommunicate their moral intentions and (2) weaken humans' perceptions of moral norms within the current context. This work strengthens previous preliminary findings, and does so within an experimental paradigm that provides increased external and ecological validity over earlier approaches.
Social robots, understood as the category of embodied robots extending into social domains through reciprocal social interaction, are still a practical novelty in most of these domains today. However, the phenomenon of novelty effects is only eclectically and peripherally addressed within most research into social human-robot interaction, and even when treated more extensively, it is usually framed as a source of noise in need of reduction. In this paper, I will argue a reframing of novelty effects that posits the phenomenon as a valuable source of information. In the first part of the paper, I present a tentative account of what I call experiential novelty in order to illustrate (1) that novelty should be understood as an 'original feature of experience', (2) that novelty arises in the engagement between an experiencer and an experience where the experiencer's possessed knowledge is inadequate in making sense of the experience, and (3) that novelty effects should be seen as cognitive and behavioural expressions of a 'search for meaning'. In the latter part of the paper, I discuss some of the current research lines within social human-robot interaction research from the perspective of this account of novelty. Most notably, I argue that retrospectively, the account holds explanatory utility in analyzing many of the findings in this research-field, and prospectively, the account holds generative utility in pointing to new ways in which participant experiences of novelty may be employed in research.
Nearly ten years after "Keepon" appeared in HRI, a question arises: what about the music used in the video "Keepon goes Seoul-searching"? The song "superfantastic", with the lyrics "possibility, it's a mystery", was written by peppertones in 2005, a Korean duo band celebrating their fifteenth anniversary in 2019. Superfantastic is a song of hope, inspired by the concerns and worries they had while starting a career in popular music, and conveys a message to "keep on dreaming," as "your biggest dreams, they might come to reality." This talk shares life stories of uncertainty: how the band started, what unexpected outcomes the band has witnessed, which decisions led them to this point, and what issues they are currently facing. As one member of the band is also involved in computer music research focusing on mobile music interaction, this talk additionally covers current research topics and what it is like to live as a multidisciplinary person spanning popular music, television, and computer music research.
Robotic Musicianship research at Georgia Tech Center for Music Technology (GTCMT) focuses on the construction of autonomous and wearable robots that can analyze, reason, and generate music. The goal of our research is to facilitate meaningful and inspiring musical interactions between humans and artificially creative machines. In this talk I present the work conducted by the Robotic Musicianship Group at GTCMT over the last 15 years, highlighting the motivation, research questions, platforms, methods, and underlying guidelines for our work.
In order to interact with people in a natural way, a robot must be able to link words to objects and actions. Although previous studies in the literature have investigated grounding, they did not consider grounding of unknown synonyms. In this paper, we introduce a probabilistic model for grounding unknown synonymous object and action names using cross-situational learning. The proposed Bayesian learning model uses four different word representations to determine synonymous words. Afterwards, they are grounded through geometric characteristics of objects and kinematic features of the robot joints during action execution. The proposed model is evaluated through an interaction experiment between a human tutor and HSR robot. The results show that semantic and syntactic information both enable grounding of unknown synonyms and that the combination of both achieves the best grounding.
Fundamental to robotics is the debate between model-based and model-free learning: should the robot build an explicit model of the world, or learn a policy directly? In the context of HRI, part of the world to be modeled is the human. One option is for the robot to treat the human as a black box and learn a policy for how they act directly. But it can also model the human as an agent, and rely on a "theory of mind" to guide or bias the learning (grey box). We contribute a characterization of the performance of these methods under the optimistic case of having an ideal theory of mind, as well as under different scenarios in which the assumptions behind the robot's theory of mind for the human are wrong, as they inevitably will be in practice. We find that there is a significant sample complexity advantage to theory of mind methods and that they are more robust to covariate shift, but that when enough interaction data is available, black box approaches eventually dominate.
One of the advantages of teaching robots by demonstration is that it can be more intuitive for users to demonstrate rather than describe the desired robot behavior. However, when the human demonstrates the task through an interface, the training data may inadvertently acquire artifacts unique to the interface, not the desired execution of the task. Being able to use one's own body usually leads to more natural demonstrations, but those examples can be more difficult to translate to robot control policies.
This paper quantifies the benefits of using a virtual reality system that allows human demonstrators to use their own body to perform complex manipulation tasks. We show that our system generates superior demonstrations for a deep neural network without introducing a correspondence problem. The effectiveness of this approach is validated by comparing the learned policy to that of a policy learned from data collected via a conventional gaming system, where the user views the environment on a monitor screen, using a Sony PlayStation 3 (PS3) DualShock 3 wireless controller as input.
This paper investigates Active Robot Learning strategies that take into account the effort of the user in an interactive learning scenario. Most research claims that Active Learning's sample efficiency can reduce training time and therefore the effort of the human teacher. We argue that the performance-driven query selection of standard Active Learning can make the job of the human teacher difficult, resulting in a decrease in training quality due to slowdowns or increased error rates. We investigate this issue by proposing a learning strategy that aims to minimize the user's workload by taking into account the flow of the questions. We compare this strategy against a standard Active Learning strategy based on uncertainty sampling and a third strategy that is a hybrid of the two. After studying the validity and behavior of these approaches in simulation, we conducted a user study where 26 subjects interacted with a NAO robot embodying the presented strategies. We report results from both the robot's performance and the human teacher's perspectives, observing how the hybrid strategy represents a good compromise between learning performance and the user's experienced workload. Based on the results, we provide recommendations on the development of Active Robot Learning strategies that go beyond robot performance.
Human demonstrations are important in a range of robotics applications, and are created with a variety of input methods. However, the design space for these input methods has not been extensively studied. In this paper, focusing on demonstrations of hand-scale object manipulation tasks to robot arms with two-finger grippers, we identify distinct usage paradigms in robotics that utilize human-to-robot demonstrations, extract abstract features that form a design space for input methods, and characterize existing input methods as well as a novel input method that we introduce, the instrumented tongs. We detail the design specifications for our method and present a user study that compares it against three common input methods: free-hand manipulation, kinesthetic guidance, and teleoperation. Study results show that instrumented tongs provide high quality demonstrations and a positive experience for the demonstrator while offering good correspondence to the target robot.
Learning preferences implicit in the choices humans make is a well studied problem in both economics and computer science. However, most work makes the assumption that humans are acting (noisily) optimally with respect to their preferences. Such approaches can fail when people are themselves learning about what they want. In this work, we introduce the assistive multi-armed bandit, where a robot assists a human playing a bandit task to maximize cumulative reward. In this problem, the human does not know the reward function but can learn it through the rewards received from arm pulls; the robot only observes which arms the human pulls but not the reward associated with each pull. We offer sufficient and necessary conditions for successfully assisting the human in this framework. Surprisingly, better human performance in isolation does not necessarily lead to better performance when assisted by the robot: a human policy can do better by effectively communicating its observed rewards to the robot. We conduct proof-of-concept experiments that support these results. We see this work as contributing towards a theory behind algorithms for human-robot interaction.
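As a toy illustration of the assistive-bandit setting (the learning rule and arm means below are hypothetical, not the paper's algorithm), one can simulate a human who learns arm values epsilon-greedily while the robot observes only the pulls:

```python
import random

# Toy assistive-bandit setting (illustrative; not the paper's algorithm):
# the human learns arm values from the rewards of their own pulls, while
# the robot observes only WHICH arms are pulled, never the rewards.

random.seed(1)
true_means = [0.2, 0.8, 0.5]   # hypothetical Bernoulli reward probabilities
est = [0.0, 0.0, 0.0]          # human's running estimate of each arm
counts = [0, 0, 0]
pulls = []                     # the only signal the robot receives

for t in range(2000):
    # epsilon-greedy human: explore 10% of the time, otherwise exploit
    if random.random() < 0.1:
        arm = random.randrange(3)
    else:
        arm = max(range(3), key=lambda a: est[a])
    reward = 1.0 if random.random() < true_means[arm] else 0.0
    counts[arm] += 1
    est[arm] += (reward - est[arm]) / counts[arm]  # incremental mean
    pulls.append(arm)

# Robot's inference from pull frequencies alone: the arm the human
# settles on in the second half of the interaction.
robot_guess = max(range(3), key=lambda a: pulls[1000:].count(a))
```

This also hints at the paper's counterintuitive finding: a human policy that pulls arms in a way that communicates what was observed can be more useful to the robot than a policy that is individually optimal.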
State-of-the-art social robot navigation algorithms often lack a thorough experimental validation in human environments: simulated evaluations are often conducted under unrealistically strong assumptions that prohibit deployment in real world environments; experimental demonstrations that are limited in sample size do not provide adequate evidence regarding the user experience and the robot behavior; field studies may suffer from the noise imposed by uncontrollable factors from the environment; controlled lab experiments often fail to properly enforce challenging interaction settings. This paper contributes a first step towards addressing the outlined gaps in the literature. We present an original experiment, designed to test the implicit interaction between a mobile robot and a group of navigating human participants, under challenging settings in a controlled lab environment. We conducted a large-scale, within-subjects design study with 105 participants, exposed to three different conditions, corresponding to three distinct navigation strategies, executed by a telepresence robot (two autonomous, one teleoperated). We analyzed observed human and robot trajectories, under close interaction settings and participants' impressions regarding the robot's behavior. Key findings, extracted from a comparative statistical analysis include: (1) evidence that human acceleration is lower when navigating around an autonomous robot compared to a teleoperated one; (2) the lack of evidence to support the conventional expectation that teleoperation would be humans' preferred strategy. To the best of our knowledge, our study is unique in terms of goals, settings, thoroughness of evaluation and sample size.
Virtual Reality (VR) can greatly benefit Human-Robot Interaction (HRI) as a tool to effectively iterate across robot designs. However, possible system limitations of VR could influence the results such that they do not fully reflect real-life encounters with robots. In order to better deploy VR in HRI, we need to establish a basic understanding of what the differences are between HRI studies in the real world and in VR. This paper investigates the differences between the real life and VR with a focus on proxemic preferences, in combination with exploring the effects of visual familiarity and spatial sound within the VR experience. Results suggested that people prefer closer interaction distances with a real, physical robot than with a virtual robot in VR. Additionally, the virtual robot was perceived as more discomforting than the real robot, which could result in the differences in proxemics. Overall, these results indicate that the perception of the robot has to be evaluated before the interaction can be studied. However, the results also suggested that VR settings with different visual familiarities are consistent with each other in how they affect HRI proxemics and virtual robot perceptions, indicating the freedom to study HRI in various scenarios in VR. The effect of spatial sound in VR drew a more complex picture and thus calls for more in-depth research to understand its influence on HRI in VR.
Most research on human-robot handovers focuses on how the robot should approach human receivers and notify them of the readiness to take an object; few studies have investigated the effects of different release behaviors. Not releasing an object when a person desires to take it breaks handover fluency and creates a bad handover experience. In this paper, we investigate the effects of different release behaviors. Specifically, we study the benefits of a proactive release, during which the robot actively detects a human grasp effort pattern. In a 36-participant user study, results suggest proactive release is more efficient than rigid release (which only releases when the robot is fully stopped) and passive release (the robot detects pulling by checking if a threshold value is reached). Subjectively, the overall handover experience is improved: the proactive release is significantly better in terms of handover fluency and ease-of-taking.
We modeled a robot's approaching behavior for giving admonishment. We started by analyzing human behaviors: we conducted a data collection in which a guard approached others in two ways, 1) for admonishment and 2) for a friendly purpose, and analyzed the difference between the two approach types. The approaching trajectories of the two types are similar; nevertheless, there are two subtle differences. First, the admonishing approach is slightly faster (1.3 m/sec) than the friendly approach (1.1 m/sec). Second, at the end of the approach, there is a 'shortcut' in the trajectory. We implemented this model of the admonishing approach in a robot. Finally, we conducted a field experiment to verify the effectiveness of the model: the robot admonished people who were using a smartphone while walking. The results show that significantly more people yield to admonishment from a robot using the proposed method than from a robot using the friendly approach method.
Many new technologies are being built to support people with dementia. However, they largely focus on the people with dementia themselves; consequently, informal caregivers, among the most important stakeholders in dementia care, remain invisible within the technology design space. In this paper, we present a six-month-long, community-based design research process in which we collaborated with dementia caregiver support groups to design robots for dementia caregiving. The contributions of this paper are threefold. First, we broaden the context of dementia robot design to give a more prominent role to informal family caregivers in the co-design process. Second, we provide new design guidelines that contextualize robots within the family caregiving paradigm and suggest new roles and behaviors for robots. These include lessening emotional labor by communicating information caregivees do not want to hear (e.g., regarding diet or medication), providing redirection during emotionally difficult times, and facilitating positive shared moments. Third, our work found connections between certain robot attributes and the stage of dementia a caregivee is experiencing. For example, caregivers wanted robots to facilitate interaction with their caregivees in early stages of dementia while staying in the background; for later stages of dementia, they wanted robots to replace caregiver-caregivee interaction to lessen their emotional burden, and to be foregrounded. These connections provide important insights into how we think about adaptability and long-term interaction in HRI. We hope our work provides new avenues for HRI researchers to study robots for dementia caregivers by engaging in community-based design.
Robots in real-world environments may need to adapt context-specific behaviors learned in one environment to new environments with new constraints. In many cases, copresent humans can provide the robot with information, but it may not be safe for them to provide hands-on demonstrations and there may not be a dedicated supervisor to provide constant feedback. In this work we present the SAIL (Simulation-Informed Active In-the-Wild Learning) algorithm for learning new approaches to manipulation skills starting from a single demonstration. In this three-step algorithm, the robot simulates task execution to choose new potential approaches; collects unsupervised data on task execution in the target environment; and finally, chooses informative actions to show to co-present humans in order to obtain labels. Our approach enables a robot to learn new ways of executing two different tasks by using success/failure labels obtained from naïve users in a public space, performing 496 manipulation actions and collecting 163 labels from users in the wild over six 45-minute to 1-hour deployments. We show that classifiers based on low-level sensor data can be used to accurately distinguish between successful and unsuccessful motions in a multi-step task (p < 0.005), even when trained in the wild. We also show that using the sensor data to choose which actions to sample is more effective than choosing the least-sampled action.
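As a purely illustrative sketch of the kind of success/failure classifier over low-level sensor data described above (the features, model, and toy traces here are assumptions for illustration, not the authors' implementation), a minimal nearest-centroid classifier might look like:

```python
import math

def summarize(signal):
    """Collapse a raw sensor trace into simple summary features:
    mean, variance, and peak absolute value."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n
    peak = max(abs(x) for x in signal)
    return (mean, var, peak)

class NearestCentroid:
    """Tiny success/failure classifier: store one centroid per label,
    predict the label whose centroid is closest in feature space."""
    def fit(self, features, labels):
        sums, counts = {}, {}
        for f, y in zip(features, labels):
            acc = sums.setdefault(y, [0.0] * len(f))
            for i, v in enumerate(f):
                acc[i] += v
            counts[y] = counts.get(y, 0) + 1
        self.centroids = {y: [v / counts[y] for v in acc]
                          for y, acc in sums.items()}
        return self

    def predict(self, f):
        return min(self.centroids,
                   key=lambda y: math.dist(f, self.centroids[y]))

# Toy traces: "successful" motions are smooth, "failed" ones are jerky.
success = [[0.1, 0.2, 0.1, 0.15], [0.12, 0.18, 0.11, 0.14]]
failure = [[0.1, 2.0, -1.5, 0.9], [0.2, 1.8, -1.2, 1.1]]
X = [summarize(s) for s in success + failure]
y = ["success"] * 2 + ["failure"] * 2
clf = NearestCentroid().fit(X, y)
print(clf.predict(summarize([0.11, 0.19, 0.12, 0.16])))  # prints "success"
```

The same predict-then-query structure would let a robot sample the actions whose sensor features sit closest to the decision boundary, which is the intuition behind choosing informative actions to show to passersby.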
Social robots have been designed to engage with older adults and children separately, but their use for inter-generational (IG) interactions, especially in nonfamilial settings, has not been studied. In addition to the challenge of simultaneously meeting the varied needs and preferences of older adults and children, the dynamic nature of these settings makes the use of robots for IG activities difficult. This paper presents a first exploratory study meant to inform the design and use of social robots for IG activities in nonfamilial settings by analyzing interviews and observations conducted at a co-located preschool and assisted living-dementia care center. Interactions occurring with and around robots were analyzed, particularly focusing on whether they fulfill the community's goals of providing children and older adults with engaging opportunities for IG contact. Findings suggest integrating intermittent pauses and breaks in interactions with the robot and unstructured collaborative robot-assisted activities can meet the needs of both generations, and call for greater community involvement in HRI for IG research.
This paper defines a dual computational framework for nonverbal communication in human-robot interactions. We use a Bayesian Theory of Mind approach to model dyadic storytelling interactions in which the storyteller and the listener have distinct roles. The role of storytellers is to influence and infer the attentive state of listeners using speaker cues, which we computationally model as a POMDP planning problem. The role of listeners is to convey attentiveness by influencing perceptions through listener responses, which we computationally model as a DBN with a myopic policy. Through a comparison of state estimators trained on human-human interaction data, we validate our storyteller model by demonstrating how it outperforms current approaches to attention recognition. Then, through a human-subjects experiment in which children told stories to robots, we demonstrate that a social robot using our listener model communicates attention more effectively than alternative approaches based on signaling.
We present a large-scale study of a series of seven lessons designed to help young children learn English vocabulary as a foreign language using a social robot. The experiment was designed to investigate 1) the effectiveness of a social robot teaching children new words over the course of multiple interactions (supported by a tablet), 2) the added benefit of a robot's iconic gestures on word learning and retention, and 3) the effect of learning from a robot tutor accompanied by a tablet versus learning from a tablet application alone. For reasons of transparency, the research questions, hypotheses and methods were preregistered. With a sample size of 194 children, our study was statistically well-powered. Our findings demonstrate that children are able to acquire and retain English vocabulary words taught by a robot tutor to a similar extent as when they are taught by a tablet application. In addition, we found no beneficial effect of a robot's iconic gestures on learning gains.
Both perceptual mechanisms (e.g., threat detection/avoidance) and social mechanisms (e.g., fears fostered via negative media) may explain the existence of the uncanny valley; however, the existing literature lacks sufficient evidence to decide whether one, the other, or a combination best accounts for the valley's effects. As perceptually oriented explanations imply the valley should be evident early in development, we investigated whether it is present in the responses of children (N = 80; ages 5-10) to agents of varying human similarity. We found that, like adults, children were most averse to highly humanlike robots (relative to less humanlike robots and humans). But, unlike adults, children's aversion did not translate to avoidance. The findings thus indicate, consistent with perceptual explanations, that the valley effect manifests well before adulthood. However, further research is needed to understand the emergence of the valley's behavioral consequences.
Conversational robots that exhibit human-level abilities in physical and verbal conversation are widely used in human-robot interaction studies, along with the Wizard of Oz protocol. However, even with the protocol, manipulating the robot to move and talk is cognitively demanding. A preliminary study with a humanoid was conducted to observe the difficulties wizards experienced in each of four subtasks: attention, decision, execution, and reflection. Apprentice of Oz is a human-in-the-loop Wizard of Oz system designed to reduce the wizard's cognitive load in each subtask; each task is co-performed by the wizard and the system. This paper describes the system design from the perspective of each subtask.
Women are underrepresented in robotics, and this may be partly due to the educational emphasis on mechanical rather than social applications of robotics. This study investigated whether teaching robotics using social robots increased girls' engagement compared to using more mechanical VEX robots. Twenty girls were recruited from school robotics classes. They were taught 30 minutes of VEX robotics and 30 minutes of social robotics in a counterbalanced order. Engagement was measured using questionnaires and observations. Results showed that girls were significantly more engaged in the social robot classes than in the VEX robot classes. This pilot study suggests a possible way to encourage more girls to study robotics.
There has recently been an explosion of work in the human-robot interaction (HRI) community on the use of mixed, augmented, and virtual reality. We present a novel conceptual framework to characterize and cluster work in this new area and identify gaps for future research. We begin by introducing the Plane of Interaction: a framework for characterizing interactive technologies in a 2D space informed by the Model-View-Controller design pattern. We then describe how Interactive Design Elements that contribute to the interactivity of a technology can be characterized within this space and present a taxonomy of mixed-reality interactive design elements. We then discuss how these elements may be rendered onto both reality- and virtuality-based environments using a variety of hardware devices and introduce the Reality-Virtuality Interaction Cube: a three-dimensional continuum representing the design space of interactive technologies formed by combining the Plane of Interaction with the Reality-Virtuality Continuum. Finally, we demonstrate the feasibility and utility of this framework by clustering and analyzing the set of papers presented at the 2018 VAM-HRI workshop.
Can we find real value for educational social robots in the very near future? We argue that the answer is yes. Specifically, in a classroom we observed, we identified a common gap: the instructor divided the class into small groups to work on a learning activity and could not address all their questions simultaneously. The purpose of this study was to examine whether social robots can assist in this scenario. In particular, we were interested to find whether a physical robot serves this purpose better than other technologies such as tablets. Benefits and drawbacks of the robot facilitator are discussed.
The current study examines the pratfall effect and interpersonal impressions of robots that appear forgetful and apologetic regarding information given to them by a human. Results demonstrated that the non-forgetful robot was rated higher on interpersonal impressions than the forgetful and the forgetful-but-apologizing robots.
This research proposal aims to draw attention to a future problem in the design of social robots. Persuasive technologies have demonstrated that they can negatively impact a user's behavior and generate addiction to current social technologies. It could be the case that similar persuasive technologies will one day be implemented in the design of social robots, leading to a recurrence of the negative impacts mentioned above. This research aims to investigate this phenomenon in order to prevent it and to encourage the ethical design of social robots.
Smart Walkers are robotic devices commonly used to improve physical stability and provide sensory support for people with lower-limb weakness or cognitive impairments. Even though such devices offer continuous monitoring during gait rehabilitation, enabling therapists to reduce their effort during gait therapies remains an interesting research topic. In this context, we present a new strategy for remotely operating and perceiving a walker-assisted gait therapy using a joystick. Three operational modes providing cognitive and physical interaction were configured in the joystick: a no-feedback mode, a light-feedback mode, and a force-feedback mode. Results from a usability and acceptance questionnaire showed better participant understanding of the light-feedback mode and higher perceived effort under the force-feedback mode. This interaction strategy shows the joystick's potential as a monitoring and control interface in walker-assisted gait.
Social perceivers often view a human agent's norm-violating behavior as diagnostic of that person's mental states, while behaviors that conform to norms are viewed as less informative. We developed a series of stimulus videos depicting a DRC-HUBO robot engaging in norm-violating and norm-conforming behaviors. We explored the hypothesis that robots' norm-violating actions may invite social perceivers to increase their mental state attributions in a similar manner as they do with humans. Surprisingly, we found that norm-conforming behaviors appear to be at least as conducive as norm-violating behaviors, and perhaps even more so, to mental state attribution to robotic agents.
The gradual transition towards the Kazakh language in the Republic of Kazakhstan motivates the application of new technologies for learning the language. Given that the Kazakh language has dialectal forms, it is important to investigate how these language features affect interaction with the synthesized speech of a robot or a computer program. This paper presents a preliminary study exploring the effect of dialect on human-robot interaction in an education-oriented environment. Participants interacted with two different robots with pre-programmed dialectal patterns, South and non-South, to learn new vocabulary. Findings show correlations of low significance; we suggest that the small sample size contributed to these results.
Social robots use gestures to express internal and affective states, but their interactive capabilities are hindered by reliance on preprogrammed or hand-animated behaviors, which can be repetitive and predictable. We propose a method for automatically synthesizing affective robot movements given manually generated examples. Our approach is based on techniques adapted from deep learning, specifically generative adversarial networks (GANs).
This paper introduces DataDrawingDroid, a wheeled robot that visualizes data by drawing data-driven generative art onto a floor. In our user study, 24 participants watched videos of three types of data drawing. T-tests on five-point Likert-scale ratings indicated that the robot attracted participants and suggested the importance of balancing functionality and aesthetics.
In this study, we examined whether humans adapt their performance to delays in a robot's actions in a leader-follower interaction scenario. Participants were asked to "teach" a sequence of musical tones to the iCub robot. The robot repeated the sequence with decreasing delay between its own taps and the taps performed by the participants. We observed that the mean period of participants' tapping behavior was affected by iCub's performance. This suggests that humans are sensitive to subtle parameters of a robot's behavior and adapt to them in leader-follower contexts.
We developed a self-report measurement, Moral Concern for Robots Scale (MCRS), which measures whether people believe that a robot has moral standing, deserves moral care, and merits protection. The results of an online survey (N = 200) confirmed the concurrent validity and predictive validity of the scale in the sense that the scale scores are successfully used to predict people's intentions for prosocial behaviors.
In general, people tend to place too much trust in robotic systems, even in emergency situations. Our study attempts to discover ways of reducing this overtrust by adding vocal warnings of error from a robot that guides blindfolded participants through a maze. The results indicate that the tested vocal warnings have no effect in reducing overtrust, but we encourage further testing of similar warnings to fully explore their potential effects.
In this paper, we present the results of a recent user study investigating whether user perception of HRI in social contexts is affected by changing the interaction modality with the robot. Leveraging the Robot Social Attribute Scale (RoSAS) survey and a statistical analysis, our results show that some interaction modalities elicit a greater feeling of discomfort in users interacting with the robot. Interestingly, the results also show an influence of users' gender on perception.
We compared three different Graphical User Interfaces (GUIs) that we designed and implemented to enable human supervision of an unmanned ship. Our findings indicate that a 3D GUI, presented either on a screen or in a Virtual Reality (VR) setting, provides several objective and subjective benefits compared to a baseline GUI representing traditional tools.
There are many challenges when it comes to deploying robots remotely, including lack of operator situation awareness and decreased trust. Here, we present a conversational agent embodied in a Furhat robot that can help with the deployment of such remote robots by facilitating teaming with varying levels of operator control.
Current methods for human-robot interaction in the underwater domain seem antiquated in comparison to their terrestrial counterparts. Visual tags and custom-built wired remotes are commonplace underwater, but such approaches have numerous drawbacks. Here we describe a method for human-robot interaction underwater that borrows from the long-standing history of diver communication using hand signals: a three-stage approach for diver-robot communication using a series of neural networks.
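The abstract does not detail its three stages, but a staged recognition pipeline of this kind is commonly organized as a chain of pluggable models; the sketch below is a hedged illustration only, with stage names, the toy signal vocabulary, and the stub models all invented for the example (in practice each stage would be a trained neural network):

```python
def run_pipeline(frames, detect_hand, classify_sign, decode_sequence):
    """Three-stage diver-signal pipeline: each stage is a pluggable
    model; here they are plain callables for illustration."""
    signs = []
    for frame in frames:
        region = detect_hand(frame)          # stage 1: localize the hand
        if region is None:
            continue                         # no hand visible in this frame
        signs.append(classify_sign(region))  # stage 2: per-frame sign label
    return decode_sequence(signs)            # stage 3: signs -> robot command

# Stub stages standing in for trained networks (hypothetical vocabulary).
detect = lambda frame: frame.get("hand")
classify = lambda region: region
decode = lambda signs: {"up,up": "ascend", "ok": "confirm"}.get(",".join(signs))

frames = [{"hand": "up"}, {}, {"hand": "up"}]
print(run_pipeline(frames, detect, classify, decode))  # prints "ascend"
```

Separating detection, per-frame classification, and sequence decoding lets each stage be retrained or swapped independently, which matters underwater where lighting and turbidity vary widely.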
Robot competitions such as RoboCup@Home are one of the most effective ways to evaluate the performance of human-robot interaction; however, they incur substantial costs for real-robot maintenance and for running evaluation sessions. We have proposed simulation software, based on immersive virtual reality, to evaluate human-robot interaction in daily-life environments. In this paper, we design a task named 'human navigation' in which the evaluation requires subjective impressions from the users. Through an experiment, we confirmed that the proposed task and system reduced the cost of practicing for the competition.
In this paper we present additional results from a prior study of speech-based games to promote early literacy skills through child-robot interaction. The additional data and results support our original conclusion that pronunciation analysis software can be an effective enabler of speech-based child-robot interactions. We also include a comparison of other pronunciation services: an updated version of the SpeechAce API and a new technology from Soapbox Labs. We reflect on some lessons learned and introduce a redesigned version of the game interaction called 'RhymeRacer', based on the results and observations from both data collections.
In this report, we investigate the factor of social presence of a robot by using an actual robot and comparing it with the CG robot studied in our previous work. A laboratory experiment is conducted using a simple jury decision-making task, in which participants play the role of jurors and decide the length of the sentence for a particular crime. During the task, a robot with expert knowledge provides suggestions regarding the length of the sentence based on other similar cases. Results show that participants who engaged with an actual robot conformed more closely to the suggested sentence length than participants who engaged with a CG robot presented on a computer monitor. These results are consistent with previous findings that interacting with physically present robots is more engaging, and further show the effect of physical presence on decision-making in a court setting.
A key skill for social robots in the wild will be to understand the structure and dynamics of conversational groups in order to fluidly participate in them. Social scientists have long studied the rich complexity underlying such focused encounters, or F-formations. However, current state-of-the-art algorithms that robots might use to recognize F-formations are highly heuristic and quite brittle. In this report, we explore a data-driven approach to detect F-formations from sets of tracked human positions and orientations, trained and evaluated on two openly available human-only datasets and a small human-robot dataset that we collected. We also discuss the potential for further computational characterization of F-formations beyond simply detecting their occurrence.
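For context on the heuristic detectors mentioned above, a classical geometric rule projects each person's orientation forward and checks whether the projected points cluster into a shared o-space. The sketch below is only an illustration of that idea; the stride and radius values are arbitrary assumptions, not parameters from this report:

```python
import math

def o_space_candidates(people, stride=1.0):
    """Project each person's facing direction forward by `stride`
    metres to get a candidate o-space centre per person."""
    return [(x + stride * math.cos(theta), y + stride * math.sin(theta))
            for x, y, theta in people]

def is_f_formation(people, stride=1.0, radius=0.5):
    """Declare an F-formation if all projected centres fall within
    `radius` of their mean, i.e. everyone faces a shared o-space."""
    pts = o_space_candidates(people, stride)
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    return all(math.dist(p, (cx, cy)) <= radius for p in pts)

# Two people facing each other 2 m apart share an o-space...
facing = [(0.0, 0.0, 0.0), (2.0, 0.0, math.pi)]
# ...while two people facing away from each other do not.
backs = [(0.0, 0.0, math.pi), (2.0, 0.0, 0.0)]
print(is_f_formation(facing), is_f_formation(backs))  # prints "True False"
```

The brittleness of such rules, fixed stride lengths and hard distance thresholds, is exactly what motivates replacing them with detectors learned from tracked position and orientation data.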
This experiment was conducted to determine interpersonal impressions of two types of compliance-gaining strategies (warning and obligation) used to ask for the consideration of robot rights. Findings demonstrated that the strategy of obligation produced higher levels of petitioner perceived caring (a factor of source credibility) and higher perceptions of task attraction. Social attraction, competence, and character did not significantly differ by type of message strategy utilized.
Smartwatches currently on the market require both hands to operate. A surface electromyography (SEMG)-based interface would enable controlling the watch with the hand on which it is worn. However, few studies have examined SEMG interfaces for healthy subjects in smartwatch applications. This study developed an algorithm to classify 3-5 finger gestures from SEMG signals recorded on the upper wrist. We also compared the classification accuracies of intra-subject models against an inter-subject model. We concluded that for a small number of gestures, factory calibration matching SEMG signals to gestures is sufficient; for a larger number of gestures, individual calibration is required just after purchasing the watch.
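The intra- versus inter-subject comparison can be illustrated with a deliberately tiny model; the amplitudes, gesture names, and midpoint-threshold rule below are invented for illustration and are not the study's algorithm:

```python
def train_threshold(samples):
    """Fit a 1-D amplitude threshold separating two gestures:
    the midpoint between the two class means."""
    rest = [a for a, g in samples if g == "rest"]
    fist = [a for a, g in samples if g == "fist"]
    return (sum(rest) / len(rest) + sum(fist) / len(fist)) / 2

def accuracy(threshold, samples):
    """Fraction of samples labeled correctly by the threshold rule."""
    correct = sum((("fist" if a > threshold else "rest") == g)
                  for a, g in samples)
    return correct / len(samples)

# Toy SEMG amplitudes: subject B's signals sit at different levels than A's.
subject_a = [(0.1, "rest"), (0.2, "rest"), (0.8, "fist"), (0.9, "fist")]
subject_b = [(0.6, "rest"), (0.7, "rest"), (2.0, "fist"), (2.2, "fist")]

intra = accuracy(train_threshold(subject_b), subject_b)  # calibrated to B
inter = accuracy(train_threshold(subject_a), subject_b)  # "factory" model
print(intra, inter)  # the calibrated model scores higher on B
```

The toy mirrors the study's conclusion in miniature: a model fit to another subject's signal levels misclassifies where amplitude distributions shift between people, which is why individual calibration pays off as the gesture set grows.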
Dementia is a growing problem among older adults, and the number of dementia patients is predicted to rise considerably in the coming years. While there is no cure for dementia, recent studies have suggested that exercise may have a positive effect on the cognitive function of dementia patients. We propose that a humanoid therapy robot is an effective tool for encouraging exercise in dementia patients. Such a robot can help address problems such as the cost of care and the shortage of healthcare workers. We have developed an interactive robotic system and conducted preliminary tests with a robot that encourages a user to engage in simple dance moves. The user's heart rate is used as feedback to decide which exercise move should be demonstrated next. The results we have found are promising, and we hope to continue this work in future studies.
Previous human-robot interaction (HRI) research has shown that trust, disclosure, and companionship may be influenced by a robot's verbal behavior. Measures used to interpret these key aspects of HRI commonly include surveys, observations, and user interviews. In this preliminary work, we aim to extend previous research by exploring the use of electroencephalography (EEG) to augment our understanding of participants' responses to vulnerable robot behaviors. We tested this method by obtaining EEG data from participants while they interacted with a robotic tutor. The robotic tutor was designed to exhibit high vulnerability (HV) or low vulnerability (LV) behaviors similar to a previous HRI study. Our preliminary results show that event-related potentials (ERPs) may provide insights into participants' early affective processing of vulnerable robot behaviors.
The purpose of this study was to test the strength of the machine heuristic, specifically when suspicion concerning the veracity of information from a news-delivering robot is primed in a message receiver. Results demonstrate that low suspicion led to higher credibility evaluations, which in turn increased behavioral intentions. Findings are discussed in light of the MAIN model.
We advocate cooperative intelligence (CI), which achieves its goals by cooperating with other agents, particularly human beings, under limited resources and in complex, dynamic environments. CI is important because it delivers better performance across a broad range of tasks; furthermore, cooperativeness is key to human intelligence, and the processes of cooperation can help people attain several life values. This paper discusses the elements of CI and our research approach to it. We identify four aspects of CI: adaptive intelligence, collective intelligence, coordinative intelligence, and collaborative intelligence. We also take an approach that focuses on the implementation of coordinative intelligence in the form of personal partner agents (PPAs) and consider the design of our robotic research platform to physically realize PPAs.
The researchers in this study have developed a novel approach using mutual reinforcement learning (MRL), in which both the robot and the human act as empathetic individuals who serve as reinforcement learning agents for each other to achieve a particular task through continuous communication and feedback. This shared model not only has a collective impact but also improves human cognition and helps build a successful human-robot relationship. In our current work, we compared our learned reinforcement model with baseline non-reinforcement and random approaches in a robotics domain to identify the significance and impact of MRL. MRL contributed to improved skill transfer, and the robot successfully predicted which reinforcement behaviors would be most valuable to its human partners.
Eye-gaze interaction is a common control mode for people with limited mobility of their hands. Mobile robotic telepresence systems are increasingly used to promote social interaction between geographically dispersed people. We are interested in how gaze interaction can be applied to such robotic systems in order to provide new opportunities for people with physical challenges. However, few studies have implemented gaze interaction in a telepresence robot, and it is still unclear how gaze interaction within these robotic systems impacts users and how the systems can be improved. This paper introduces our research project, which takes a two-phase approach to investigating a novel interaction system we developed. Results of the two studies are discussed and future plans are described.
Many new and advanced technologies have been designed to develop the use of CGI and animation in the filmmaking industry. However, very little research exists on the possibility of using robotic technologies in stop-motion animation. This study examines the effect of using robotic techniques in stop-motion animation. In addition, it aims to develop robotic technologies that serve to reduce the cost, time, and risks of creating animation with stop-motion art. We collect observations from expert interviews to provide preliminary evidence of the possibilities of human-robot interaction in stop-motion. As media artists and researchers, we are looking for possible solutions to preserve stop-motion art. In a similar vein, this study will further explore the effectiveness of robotic technologies in different filmmaking dimensions according to the opinions of expert animators.
In this report, a new framework is proposed for inferring a user's personality traits from their habitual behaviors during face-to-face human-robot interactions, aiming to improve the quality of those interactions. The framework enables the robot to extract the person's visual features, such as gaze, head, and body motion, and vocal features, such as pitch, energy, and Mel-Frequency Cepstral Coefficients (MFCCs), during a conversation led by the robot, which poses a series of questions to each participant. The participants are expected to answer each question with their habitual behaviors. Each participant's personality traits can be assessed with a questionnaire. All the data will then be used to train a regression or classification model for inferring the user's personality traits.
Trust is an important topic in medical human-robot interaction, since patients may be more fragile than other groups of people. This paper investigates the issue of users' trust when interacting with a rehabilitation robot. In the study, we investigate participants' heart rate and perception of safety in a scenario in which their arm is led by the rehabilitation robot in two types of exercises at three different velocities. The participants' heart rates are measured during each exercise, and the participants are asked how safe they feel after each exercise. The results showed that velocity and type of exercise have no significant influence on the participants' heart rate, but they do significantly influence how safe participants feel. We found that increasing velocity and longer exercises negatively influence participants' perception of safety.
With the need for geriatric care workers growing faster than can be met, the possibility of socially assistive robots filling this need has garnered increasing attention. This heightened interest in robots as social care workers, however, leads to concerns in detecting possible robot misbehavior. We propose a short questionnaire, based on current elder abuse screening tools, as a method to detect intrusion or misconfiguration in caregiver robots. We focus on misbehavior that can cause psychological or financial harm to the caregiver recipient. We discuss requirements, limitations, and future enhancements.
As an alternative to social co-robots, or collaborative robots, displacing workers in the workforce, social robots may, in the future, be used to augment workers with intellectual and developmental disabilities (IDD) and improve their work environments. We are developing AIDA, a co-robot to uplift and augment the abilities of workers who have IDD and work in a light manufacturing facility. This paper presents the design process for discovering the workers' needs and a description of our works-in-progress prototype social co-robot, AIDA, an artificially intelligent disability aide designed to function safely and securely alongside these workers.
To advance the research area of social robotics, it is important to understand the effect of different social cues on the social agency attributed to a robot. This paper evaluates three sets of verbal and nonverbal social cues (emotional voice intonation, facial expression, and head movement) demonstrated by a social agent delivering several messages. A convenience sample of 18 participants interacted with SociBot, a robot that can demonstrate such cues, experiencing seven combinations of social cues in sequence. After each interaction, participants rated the robot's social agency, assessing its resemblance to a real person and the extent to which they judged it to be like a living creature. As expected, adding social cues led to higher social agency judgments; facial expression in particular was connected to higher social agency judgments.
This abstract presents a preliminary evaluation of the usability of a novel system for cognitive testing, which is based on the multimodal interfaces of the social robot "Pepper" and the IBM cloud AI "Watson". Thirty-six participants experienced the system without assistance and completed the System Usability Scale questionnaire. Results show that the usability of the system is high.
This work presents preliminary findings from a pilot study researching the complementarity and similarity effects of robot personality on children (N=22). We aim to advance research on robot personality by using a pairwise robot comparison study, a novel method in HRI. Our findings show that the attributed gender and the voice of the robots were the main reasons given for a preference. Based on the preliminary data, we hypothesize that boys prefer a robot with the same gender as their own.
Project Aquaticus is a human-robot teaming competition on the water involving autonomous surface vehicles and human-operated motorized kayaks. Teams composed of both humans and robots share the same physical environment to play capture the flag. In this paper, we present results from seven competitions of our half-court (one participant versus one robot) game. We found that participants indicated more trust in more aggressive behaviors from robots.
Humans need to understand and trust the robots they are working with. We hypothesize that how a robot moves can impact people's perception of it and their trust in it. We present a methodology for a study exploring people's perception of a robot whose motion follows the animation principle of slow in, slow out (an eased velocity profile) versus a robot moving with a linear velocity profile. Study participants will interact with the robot within a home context to complete a task while the robot moves around the house. The participants' perceptions of the robot will be recorded using the Godspeed Questionnaire. A pilot study shows that participants notice the difference between the linear and the slow in, slow out velocity profiles, so the full experiment will allow us to compare perceptions of the two observable behaviors.
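The slow in, slow out principle in the abstract above can be illustrated as an eased position profile whose velocity starts and ends near zero. The sketch below is illustrative only, using a smoothstep curve as a stand-in; the study's actual velocity profiles are not specified here.

```python
def linear_position(t: float) -> float:
    """Linear profile: constant velocity over normalized time t in [0, 1]."""
    return t

def slow_in_slow_out_position(t: float) -> float:
    """Smoothstep profile: eases into and out of the motion."""
    return 3 * t**2 - 2 * t**3

def velocity(profile, t: float, dt: float = 1e-4) -> float:
    """Numerical derivative of a position profile, clamped to [0, 1]."""
    return (profile(min(t + dt, 1.0)) - profile(max(t - dt, 0.0))) / (2 * dt)

# The eased profile starts near zero velocity and peaks mid-motion,
# while the linear profile holds a constant velocity throughout.
start = velocity(slow_in_slow_out_position, 0.0)
mid = velocity(slow_in_slow_out_position, 0.5)
```

With this profile a robot covers the same distance in the same time, but accelerates gently at the start and decelerates gently at the end, which is the observable difference participants are asked to notice.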
In this research, the authors considered a method of generating an ecosystem of learning between classrooms using robots that children can edit themselves. They developed a robot that allows children to choose faces and motions, as well as create user content and receive feedback on that content. This robot system was introduced and tested in an elementary school with 164 students. First, 18 fifth-grade children were taught how to set up the robot and how to create usable content; from this, 19 pieces of robot content were created. Next, the robot system was monitored for 2 months, and the teacher-created content was compared with the student-created content. The content created by the children proved more interesting to the students than the teacher-created content. Through an interview with the school librarian, the authors found that the classroom developed a learning ecosystem extending from older to younger children.
Since a projection augmented reality (AR) robot can present rich information through its projector, it can be useful in museums and art galleries that need to provide information to crowds. Such a robot needs to interact continuously with people, so human-aware path planning is also required. We prototyped a projection AR mobile robot that implements human-aware path planning and describe future research directions.
Acoustic communication between humans and robots, via speech, relies on initially processing verbal utterances into text. In sophisticated scenarios, the semantic meaning of the utterances is extracted so responses are given in context (e.g., conversational agents); in simpler scenarios, these utterances can be mapped to direct responses or actionable tasks the agent can perform. In its basic form, speech is an acoustic signal that encodes linguistic messages, words, through a set of acoustic properties such as frequency range, harmonic-to-noise ratio, and duration. Nonetheless, language is part of an individual's identity and an integral part of our social dynamics; thus, developing and understanding a language has an intrinsic impact. We investigate a communication strategy in which unique, covert, tonal languages are dynamically generated for a given robot-human pair. We explore the application of such languages in different scenarios - social and tactical - and the learning effort required from the human, and discuss a near future in which robot agents autonomously generate such tonal languages, potentially creating stronger bonds and robot-human relationships.
Socially assistive robots (SARs) have the potential to improve the working conditions of care workers, empower vulnerable people to retain independence, and even provide social companionship. Through a series of focus groups, this study explores how older adults and professional care workers in a Continuing Care Retirement Community (CCRC) perceived a bespoke SAR platform known as Stevie. Using a mixed-methods approach, it emerged that both care staff and residents developed a strong fondness for the robot, perceived it to be useful, and could envision a range of useful applications.
BabeBay is a child companion robot capable of real-time multimodal affective computing. Accurate and effective affective fusion gives BabeBay the adaptability to respond to different children in different emotional states during interaction. Furthermore, the corresponding cognitive computing and robot behaviors can be enhanced for personalized companionship.
The purpose of this study was to propose a model of the development of trust in social robots. Insights into interpersonal trust were adopted from social psychology, and a novel model was proposed. In addition, this study aimed to investigate the relationship between trust development and self-esteem. To validate the proposed model, an experiment using the communication robot NAO was conducted, and changes in categories of trust as well as in self-esteem were measured. Results showed that general and category trust developed in the early phase of interaction. Self-esteem also increased over the course of the interactions with the robot.
The uncanny valley phenomenon has been researched for the past 15 years, with attempts to prove its validity meeting limited success. Researchers have been trying to recreate Masahiro Mori's hypothesised function through a variety of experiments using images of real robots and images created by morphing humans and robots. Although some of these experiments have provided results supporting Mori's hypothesis, there is no solid confirmation of their legitimacy when it comes to real human-robot interaction. This paper examines the methods and subsequent results of these studies to draw conclusions regarding the validity of experimental data on Mori's hypothesis. These conclusions lead us to propose an Augmented Reality (AR) experiment designed to verify the results of previous experimentation.
The use of social robots in interventions for persons with Neuro-Developmental Disorder (NDD) has been explored in several studies. This paper describes a plush social robot with an elephant appearance, called ELE, that acts as a conversational companion and has been designed to promote the engagement of persons with NDD during interventions. We also present the initial evaluation of ELE and preliminary results in terms of visual attention improvement in a storytelling context.
This position paper discusses the question of incorporating roboethics into the roboticists' thinking about their research. On the one hand, there has been a growing recognition of the need to develop and advance the field of roboethics. On the other hand, for different reasons, a large part of the robotics community has still been reluctant to explicitly address ethical considerations in robotics research. We argue here that in order to facilitate and foster ethical reflection in roboticists' work, roboethics should be seen as a research puzzle. This implies studying rather than only applying specific ethical principles, as well as taking highly creative and pioneering approaches towards emerging ethical challenges.
This paper outlines the system design, capabilities, and potential applications of an Augmented Reality (AR) framework developed for Robot Operating System (ROS) powered robots. The goal of this framework is to enable high-level human-robot collaboration and interaction. It allows users to visualize the robot's state in intuitive modalities overlaid onto the real world and to interact with AR objects as a means of communication with the robot, thereby creating a shared environment in which humans and robots can interact and collaborate.
We present in this paper our work towards a new dynamic method of generating spatial referring expressions. While people are generally ambiguous in their descriptions of locations, previous methods of artificial generation mostly considered non-ambiguous descriptions. However, to increase the naturalness of interaction and share the communication workload, robots should be able to generate language in a more dynamic way. Our method initially produces ambiguous spatial referring expressions and then dynamically generates repair statements. We built a classifier using data from 18 participants as they described locations to each other. We perform a preliminary analysis of this method using two further pilot studies.
Human-robot trust is crucial to successful human-robot interaction. We conducted a study with 798 participants distributed across 32 conditions using four dimensions of human-robot trust (reliable, capable, ethical, sincere) identified by the Multi-Dimensional-Measure of Trust (MDMT). We tested whether these dimensions can differentially capture gains and losses in human-robot trust across robot roles and contexts. Using a 4 scenario x 4 trust dimension x 2 change direction between-subjects design, we found the behavior change manipulation effective for each of the four subscales. However, the pattern of results best supported a two-dimensional conception of trust, with reliable-capable and ethical-sincere as the major constituents.
In this paper, we introduce a handy and affordable solution we are developing for education in human-robot social interaction. The solution consists of a smart-device-controlled robot with a 3D-printed body, a cloud-based integrated development environment that provides intuitive programming and simulation of the robot, and embedded functions enabling intelligent, natural responses from the robot. The outline and architecture of the proposed system are briefly explained.
This paper introduces our approach to building a robot with communication capability based on two key features: stochastic neural dynamics and prediction error minimization. A preliminary experiment showed that a humanoid robot was able to imitate another's actions by means of these key features. In addition, we found that communicative patterns emerged between two robots, in which each robot inferred the intention of the other agent behind its sensory observations.
In this paper, we propose a ROS-based system to reconstruct the motion of the human upper limb from data collected with two Myo armbands in a hybrid manner. Information from the inertial sensors is fused to reconstruct shoulder and elbow kinematics. Electromyographic (EMG) signals are used to estimate wrist kinematics, fully capturing the motion of the user's 5-DoF (degree of freedom) arm. The system shows good pose estimation accuracy compared to the XSens suit, with an average RMSE of 6.61° ± 3.31° and an R² of 0.90 ± 0.07.
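The accuracy figures above use two standard metrics. The snippet below sketches their textbook definitions with made-up joint-angle samples for illustration only; it is not the paper's data or pipeline.

```python
import math

def rmse(estimates, references):
    """Root-mean-square error between estimated and reference angles (degrees)."""
    n = len(estimates)
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimates, references)) / n)

def r_squared(estimates, references):
    """Coefficient of determination: 1 - residual variance / total variance."""
    mean_ref = sum(references) / len(references)
    ss_res = sum((r - e) ** 2 for e, r in zip(estimates, references))
    ss_tot = sum((r - mean_ref) ** 2 for r in references)
    return 1.0 - ss_res / ss_tot

# Hypothetical elbow-angle samples (degrees), illustrative only:
est = [10.0, 32.0, 58.0, 91.0, 118.0]
ref = [12.0, 30.0, 60.0, 90.0, 120.0]
```

An RMSE near zero and an R² near one together indicate that the reconstructed kinematics track the reference suit closely.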
We are facilitating software development for mobile social robots that operate 'in the wild': that is, in real daily environments such as shopping malls. One fundamental difficulty in development is testing unfinished programs on a real robot and in the presence of people, because this process can be significantly time-consuming. To ease software development, we developed a simulator that allows the simulation of interactions among people and between people and the robot. With a user study, we analyzed how people's working process changed and evaluated the amount of time they saved in software development with the simulator.
This paper proposes a sensor substitution system that generates time-series sensor data via recurrent neural network (RNN)-based sequence analysis to regress a virtual sensor sequence. More specifically, the proposed system estimates capacitive touch sensor signals by exploiting tiny motions and audio signals generated by touch. The proposed system was validated in a supervised learning task in which multiple sensors---specifically, an omnidirectional microphone, an accelerometer, a gyroscope, and a capacitive touch sensor---were employed. The multivariate temporal information of the input sequence was modelled using a gated recurrent unit (GRU). The experimental results verified the feasibility of the proposed system and indicated that, compared to inertial signals (e.g., accelerations and angular velocities), audio signals are better for estimating the corresponding touch sensor sequences.
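As a rough illustration of the gated recurrent unit at the core of such a system, the following scalar GRU step shows how a hidden state regressing the virtual touch signal is updated frame by frame. The weights and "audio + motion" feature values are toy numbers and biases are omitted; this is not the trained model from the paper.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, w):
    """One GRU step for scalar input x and scalar hidden state h.
    w holds input/recurrent weights for the update gate z, reset gate r,
    and candidate state; biases are omitted for brevity."""
    z = sigmoid(w["wz"] * x + w["uz"] * h)                 # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h)                 # reset gate
    h_tilde = math.tanh(w["wh"] * x + w["uh"] * (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde

# Toy weights and a toy per-frame feature stream (illustrative only):
weights = {"wz": 0.5, "uz": 0.1, "wr": 0.4, "ur": 0.2, "wh": 0.9, "uh": 0.3}
h = 0.0
touch_estimates = []
for x in [0.0, 0.8, 1.0, 0.2, 0.0]:
    h = gru_step(x, h, weights)
    touch_estimates.append(h)   # regressed virtual touch-sensor value
```

In the actual system, each step's input would be a feature vector of audio and inertial measurements, and the output layer would regress the capacitive touch signal learned under supervision.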
Possible applications of robots are growing in educational contexts, where they can support and enhance traditional learning at any level, including kindergarten. However, the acceptance of such novel technology among kids is not fully understood, especially for the youngest ones. In this abstract, we present an experiment that investigates the attitudes of 52 preschool children before and after interaction with a humanoid robot in a kindergarten setting. The main hypothesis is that ideas and prejudices can change after a controlled interaction with a physical robot. The study found that children exposed to the robot showed decreased distress and positively changed their attitude toward the technological device. The results suggest that early, controlled exposure may facilitate future acceptance.
Hand guidance of robots has proven to be a useful tool both for programming trajectories and for kinesthetic teaching. However, hand guidance is usually relegated to robots possessing joint-torque sensors (JTS). Here we propose to extend hand guidance to robots lacking those sensors through the use of an Augmented Reality (AR) device, namely Microsoft's HoloLens. Augmented reality devices have been envisioned as a helpful addition both to ease robot programming and to increase the situational awareness of humans working in close proximity to robots. We reference the robot by using a registration algorithm to match a robot model to the spatial mesh. The built-in hand tracking capabilities are then used to calculate the position of the hands relative to the robot. By decomposing the hand movements into orthogonal rotations, we achieve completely sensorless hand guidance without any need to build a dynamic model of the robot itself. We performed the first tests of our approach on a commonly used industrial manipulator, the KUKA KR-5.
This work documents a playful human-robot interaction, in the form of a game of charades, through which a humanoid robot is able to learn how to produce and recognize gestures by interacting with human participants. We describe an extensive dataset of gesture recordings, which can be used for future research into gestures, specifically for human-robot interaction applications.
Drawing the attention of passersby is a basic task for a social robot initiating an interaction in a public environment (e.g., shopping malls, museums, or hospitals). Humans use several social cues, both verbal and nonverbal, to draw the attention of others. In this study, we investigate whether similar behaviors can also be used effectively by a social robot to draw attention. To this end, we set up a humanoid robot (Pepper) to act as a welcoming robot at the entrance of a university building. The behaviors selected for Pepper include one or a combination of behavioral modalities (i.e., a waving gesture, utterance, and movement). These behaviors are triggered automatically using the output of people detection software that tracks passersby and monitors their head keypoints (nose, eyes, and ears). The reactions of people toward Pepper are observed and recorded by means of an observation sheet.
For several weeks, we deployed Pepper at the entrance with the aim of wearing off the novelty effect. In our final study, we collected data from several hundred passersby (N=364) and conducted post-interviews with randomly selected ones (N=28). Passersby noticed Pepper at the entrance and clearly recognized its role as a welcoming robot. In addition, Pepper was able to draw more attention when displaying a combination of behavioral modalities. However, passersby did not recall the robot's utterances: for example, they were unable to reproduce them, or mistakenly claimed that the robot said something when it was only waving.
This research used a questionnaire survey to examine the positive and negative opinions of Japanese university students about living with robots. The results show that the effect of educational background on the hope of living with a robot is statistically significant, that gender affects negative attitudes toward the social influence of robots, and that the negative correlation between the hope of living with a robot and negative attitudes toward emotional interaction with robots is statistically significant. An exploratory qualitative classification reveals that most Japanese undergraduates hold the negative opinion that they have no need to live with robots because they are not alone.
AVEC is a power-assist wheelchair system with an intuitive interface that the elderly population can use more easily. In the process of the iterative design cycle, we conducted field interviews with target users, including contextual inquiry. The resulting data imply that AVEC improved the usability of the wheelchair and enriched communication between passengers and caregivers.
Previous work in HHI and HRI demonstrates the impact of gaze on the human interaction experience (IE). In this paper, we discuss an experimental design intended to measure the influence of the gaze aversion ratio (GAR) on users' IE with a social robot. We assume that the gaze behavior studied in HHI is restrictive for autonomous social robots, limiting the time available to robots for perception tasks besides HRI. Our goal is to determine whether a deviation from human gaze behavior is accepted by users in HRI. With a between-subjects experimental design, we evaluate the effect of varied GAR on IE and behavioral measures. A pilot study with 9 participants suggests that averting the gaze for longer time spans is favorable.
Direct human supervision of a robot's erroneous behavior is crucial for enhancing robot intelligence toward 'flawless' human-robot interaction. Motivating humans to engage more actively for this purpose is, however, difficult. To alleviate such strain, this research proposes a novel approach: a growth and regression metaphoric interaction design inspired by the communicative, intellectual, and social competence aspects of human developmental stages. We implemented the interaction design principle in a conversational agent combined with a set of synthetic sensors. Within this context, we aim to show that the agent successfully encourages online labeling activity in response to the faulty behavior of robots as a supervision process. A field study will be conducted to evaluate the efficacy of our proposal by measuring the annotation performance of real-time activity events in the wild. We expect to provide a more effective and practical means of supervising robots through a real-time data labeling process for long-term use in human-robot interaction.
Recent work has formalized the explanation process in the context of automated planning as one of model reconciliation - i.e., a process by which the planning agent can bring the explainee's (possibly faulty) model of a planning problem closer to its understanding of the ground truth until both agree that its plan is the best possible. The content of explanations can thus range over misunderstandings about the agent's beliefs (state), desires (goals), and capabilities (action model). Though existing literature has considered different kinds of these model differences to be equivalent, literature on explanations in the social sciences has suggested that explanations with similar logical properties may often be perceived differently by humans. In this brief report, we explore to what extent humans attribute importance to different kinds of model differences that have traditionally been considered equivalent in the model reconciliation setting. Our results suggest that people prefer explanations related to the effects of actions.
A series of studies was conducted as an exploratory survey to examine the possible roles of a robot as a partner in healthcare. The results show that Japanese people are willing to use a humanoid robot as an exercise partner in a variety of usage scenarios. In particular, a walking robot was found to be useful for socially anxious individuals, in that it can serve as a safe partner that does not evaluate them.
We present a new multi-robot system as a means of creating a visual communication cue that adds dynamic illustration to static figures or diagrams, enhancing the power of delivery and improving an audience's attention. The proposed idea is that when a presenter/speaker writes something such as a shape or letter on a whiteboard table, multiple mobile robots trace the shape or letter while dynamically expressing it. The dynamic movement of multiple robots further stimulates the cognitive perception of the audience, positively affecting the comprehension of content. To do this, we apply image processing algorithms to extract feature points from a handwritten shape or letter, while a task allocation algorithm deploys the robots to the feature points to highlight the shape or letter. We present preliminary experimental results that verify the proposed system with various characters and letters, such as those of the English alphabet.
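One simple way to realize the deployment step described above is to pair robots with extracted feature points by distance. The sketch below uses a greedy nearest-pair heuristic with hypothetical coordinates; the paper's actual task allocation algorithm is not specified here and may differ.

```python
import math

def greedy_allocate(robots, points):
    """Assign each feature point to the nearest unassigned robot.
    Greedy nearest-pair heuristic: repeatedly take the globally closest
    (robot, point) pair among those still unassigned."""
    pairs = sorted(
        (math.dist(r, p), ri, pi)
        for ri, r in enumerate(robots)
        for pi, p in enumerate(points)
    )
    used_r, used_p, assignment = set(), set(), {}
    for _, ri, pi in pairs:
        if ri not in used_r and pi not in used_p:
            assignment[pi] = ri   # feature-point index -> robot index
            used_r.add(ri)
            used_p.add(pi)
    return assignment

# Toy example: three robots and three feature points extracted from a letter.
robots = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
points = [(4.5, 0.5), (0.2, 4.8), (0.3, 0.1)]
assignment = greedy_allocate(robots, points)
```

A greedy heuristic keeps total travel short in easy cases; an optimal assignment (e.g., the Hungarian algorithm) would minimize total distance exactly at higher computational cost.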
Telepresence takes place when a user is afforded the experience of being in a remote environment or virtual world through the use of immersive technologies. A humanoid robot and a control apparatus that correlates the operator's movements whilst providing sufficient sensory feedback encompass such immersive technologies. This paper considers the control mechanisms that afford telepresence, the requirements for continuous or extended telepresence control, and the health implications of engaging in complex time-constrained tasks. We present Telesuit - a full-body telepresence control system for operating a humanoid telepresence robot. The suit is part of a broader system that considers the constraints of controlling a dexterous bimanual robotic torso and the need for modular hardware and software that allows for high-fidelity immersiveness. It incorporates a health-monitoring system that collects information such as respiratory effort, galvanic skin response, and heart rate. This information is consequently leveraged by the platform to adjust the telepresence experience and apply control modalities for autonomy. Furthermore, the design of the Telesuit garment considers both functionality and aesthetics.
To be accepted in our everyday lives and to be valuable interaction partners, robots should be able to display emotional and empathic behaviors. That is why there has been a great focus on developing empathy in robots in recent years. However, there is no consensus on how to measure how empathic a robot is considered to be. In this context, we decided to construct a questionnaire that specifically measures the perception of a robot's empathy in human-robot interaction (HRI). We therefore conducted pretests to generate items, which were validated by experts and will be validated further in an experimental setting.
The current attention on quality monitoring instruments for hospitalized patients imposes a high data registration workload on nurses. The focus of our research was to investigate whether a social robot is able to take over some of this data collection by administering questionnaires autonomously. We performed an exploratory design experiment on the internal medicine ward of the Franciscus Gasthuis & Vlietland hospital. 35 patients (mean age 64.1±17.7, 20 female) participated in the study. We used the social robot Pepper to conduct five questionnaires on medical history, defecation, pain, memory and sleep. Patients and nurses found the robot reasonably acceptable in this role. Further research is needed to address concerns and optimize the nurse-robot task division.
In presentations, presenters are required to use non-verbal behavior involving face direction and gesture, which is important for promoting the audience's understanding. However, it is not simple for presenters to use appropriate non-verbal behavior depending on the presentation context. To address this issue, this paper proposes a robot presentation system that allows presenters to reflect on their presentation by authoring the presentation scenario used by the robot. The features of the proposed system are that presenters can easily and quickly author and modify their presentation, and that they can become aware of points to be modified. In addition, this paper reports a case study using the system with six participants, whose purpose was to compare the proposed system with a conventional system in terms of the effort required to author the scenario. The results suggest that our system allows presenters to modify their presentations easily and quickly.
Robotic rollators (motorized walkers) require appropriate assist performance to improve the quality of life of the elderly and disabled. In this pilot study, the performance of commercial robotic rollators was evaluated by analyzing the effect of load condition on muscle activity and on knee and hip joint angles when walking on a slope. Results showed that the highest muscle activity occurred under a load of 5 kg, but there was no difference in joint angle according to load condition. In future studies, we plan to test the performance with more subjects. In addition, we will perform motion analysis and muscle activity analysis according to changes in assist, brake, and speed levels.
Current manufacturing applications are subject to constant changes in production orders for their robotic systems to adapt to the dynamic nature of the market. Hence, re-programming robots needs to be a fast, easy and effective process. In this demonstration, we present an augmented reality (AR) interface using HoloLens. Our interface provides an intuitive platform to re-program a robotic packing application through simple hand gestures and the information gathered by the HoloLens' spatial mapping functionality.
We demonstrate a system to control robots in the user's proximity with pointing gestures---a natural device that people use all the time to communicate with each other. Our setup consists of a miniature quadrotor, the Crazyflie 2.0; a wearable inertial measurement unit, the MetaWearR+, mounted on the user's wrist; and a laptop as the ground control station. The video of this demo is available at https://youtu.be/yafy-HZMk_U .
Face-to-face contact with humans is an important functional behavior for humanoid robots. We predict the future position of the face to achieve natural behavior similar to communication between people. The robot gazes at the predicted point to compensate for mechanical delay. The proposed system for face-to-face contact thus reduces this delay.
We demonstrate an open, extensible framework for enabling faster development and study of physically situated interactive systems. The framework provides a programming model for parallel coordinated computation centered on temporal streams of data, a set of tools for data visualization and processing, and an open ecosystem of components. The demonstration showcases an interaction toolkit of components for systems that interact with people via natural language in the open world.
The Social Robots in Therapy workshop series aims at advancing research topics related to the use of robots in the contexts of Social Care and Robot-Assisted Therapy (RAT). Robots in social care and therapy have been a long-standing promise in HRI, as they have the opportunity to improve patients' lives significantly. Multiple challenges have to be addressed for this, such as building platforms that work in proximity with patients, therapists, and health-care professionals; understanding user needs; developing adaptive and autonomous robot interactions; and addressing ethical questions regarding the use of robots with a vulnerable population.
The full-day workshop follows last year's edition, which centered on how social robots can improve health-care interventions, how increasing the degree of autonomy of the robots might affect therapies, and how to overcome the ethical challenges inherent in the use of robot-assisted technologies. This second edition of the workshop will focus on the importance of equipping social robots with socio-emotional intelligence and the ability to perform meaningful and personalized interactions.
This workshop aims to bring together researchers and industry experts in the fields of Human-Robot Interaction, Machine Learning, and Robots in Health and Social Care. It will be an opportunity for all to share and discuss ideas, strategies, and findings to guide the design and development of robot-assisted systems for therapy and social care that can provide personalized, natural, engaging, and autonomous interactions with patients (and health-care providers).
The 2nd International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interactions (VAM-HRI) will bring together HRI, Robotics, and Mixed Reality researchers to identify challenges in mixed reality interactions between humans and robots. Topics relevant to the workshop include development of robots that can interact with humans in mixed reality, use of virtual reality for developing interactive robots, the design of new augmented reality interfaces that mediate communication between humans and robots, comparisons of the capabilities and perceptions of robots and virtual agents, and best design practices. VAM-HRI was held for the first time at HRI 2018, where it served as the first workshop of its kind at an academic AI or Robotics conference, and served as a timely call to arms to the academic community in response to the growing promise of this emerging field. VAM-HRI 2019 will follow on the success of VAM-HRI 2018, and present new opportunities for expanding this nascent research community.
Although both HRI and Communication Science often trace their origins to the transdisciplinary cybernetics of the 20th century, they have since developed in relative isolation, with scant scholarly exchange concerning the similarities and differences in their assumptions, insights, and approaches. The purpose of this half-day workshop is to explore the ways in which traditional communication theory and Human-Machine Communication theory from the discipline of Communication Science can be applied and be utilized in HRI studies.
Expressivity - the use of multiple, non-verbal, modalities to convey or augment the communication of internal states and intentions - is a core component of human social interactions. Studying expressivity in contexts of artificial agents has led to explicit considerations of how robots can leverage these abilities in sustained social interactions. Research on this covers aspects such as animation, robot design, mechanics, as well as cognitive science and developmental psychology. This workshop provides a forum for scientists from diverse disciplines to come together and advance the state of the art in developing expressive robots. Participants will discuss points of methodological opportunities and limitations, to develop a shared vision for next steps in expressive social robots.
Nowadays, off-the-shelf social robots are used more frequently by the HRI community to research social interactions with different types of users across a range of domains such as education, retail, health care, and public spaces. Everyone doing HRI research with end-users is invited to submit a case study to our workshop. We are particularly interested in case studies where things did not go as planned. Case studies describing research in the lab or in the wild are both welcome. Examples of unplanned experiences could include, but are not limited to, unexpected responses from the user, issues with the experimental setup, or simply challenges with transferring theory to the real world. In this workshop, we focus on off-the-shelf robots. In order to generalize and compare differences across multiple HRI domains and create common solutions, we will provide a template for your case study. We are interested in learning how such unexpected HRI results can be reported. In the workshop, we will discuss and study how failures are reported, and work toward a list of good practices for reporting failures that can hopefully inspire the HRI community.
Exercising is strongly recommended for the prevention and treatment of highly prevalent pathologies such as cardiovascular diseases, cancer, and diabetes. The World Health Organization (WHO) states that insufficient physical activity is one of the leading risk factors for death worldwide. The decrease of physical activity in our society is not just an individual problem; it is also influenced by a variety of environmental and social factors. Hence, it is important to target this issue from an interdisciplinary, multi-perspective point of view. This full-day workshop will offer a forum for researchers from a variety of backgrounds to discuss the potentials and limitations of using social robots to promote physical activity.
Robots are being increasingly developed as social actors, entering public and personal spaces such as airports, shopping malls, care centres, and even homes, and using human or animal-like social techniques to work with people. Some even aim to engineer social situations, or are designed specifically for an emotional response (e.g., comforting a person). However, if we consider these robots as social interventions, then it is important to recognize that the robot's design - its behavior, its application, its appearance, even its marketing image - will have an impact on society and on the spaces it enters. While in some cases this may be a positive effect, social robots can also contribute negatively, e.g., reinforcing gender stereotypes or promoting ageist views. This full-day workshop aims to offer a forum for Human-Robot Interaction (HRI) researchers to explore this issue, and to work toward potential opportunities for the field. Ultimately, we want to promote robots for social good that can contribute to positive social changes for socio-political issues (e.g., ageism, feminism, homelessness, environmental issues). The political aspects of technologies have long been scrutinized in related areas such as Science and Technology Studies (STS) and Human-Computer Interaction (HCI). In particular, critical design explicitly targets the design of technologies that can contribute to our understanding of how technology can impact society. This workshop aims to strengthen this discussion in the HRI community, with the goal of working toward initial recommendations for how HRI designers can include elements of critical design in their work.
This workshop is dedicated to discussing and exploring the specific interdisciplinary aspects of bodily human-robot interaction, and to establishing a common ground for this area as a recognized and continued research topic. Bodily interaction with robots and robotic devices is partially established in niche applications such as exoskeletons, assistive devices, and advanced machines for physical training, where bodily interaction is the application. Bodily interaction is expected to play a broader role in human-robot interaction, for instance in manufacturing and in social and entertainment robotics. The direct exchange of force and motion in bodily interaction creates a range of engineering challenges, but also entwines engineering directly with topics that traditionally reside in the realm of the health and humanistic sciences, from biomechanics to humans' social responses to the prompting and responses of physical interaction.
For practical reasons, most human-robot interaction (HRI) studies focus on short-term interactions between humans and robots. However, such studies do not capture the difficulty of sustaining engagement and interaction quality across long-term interactions. Many real-world robot applications will require repeated interactions and relationship-building over the long term, and personalization and adaptation to users will be necessary to maintain user engagement and to build rapport and trust between the user and the robot. This full-day workshop brings together perspectives from a variety of research areas, including companion robots, elderly care, and educational robots, in order to provide a forum for sharing and discussing innovations, experiences, works-in-progress, and best practices which address the challenges of personalization in long-term HRI.
Robotic rescuers digging through rubble, fire-fighting drones flying over populated areas, robotic servers pouring hot coffee for you, and a nursing robot checking your vitals are all examples of current or near-future situations where humans and robots are expected to interact in a dangerous situation. Dangerous HRI is an as-yet understudied area of the field. We define dangerous HRI as situations where humans experience some amount of risk of bodily harm while interacting with robots. This interaction could take many forms, such as a bystander (e.g. when an autonomous car waits at a crossing for a pedestrian), as a recipient of robotic assistance (rescue robots), or as a teammate (like an autonomous robot working with a SWAT team). To facilitate better study of this area, the Dangerous HRI workshop brings together researchers who perform experiments with some risk of bodily harm to participants to discuss strategies for mitigating this risk while still maintaining the validity of the experiment. This workshop does not aim to tackle the general problem of human safety around robots, but instead focuses on guidelines for and experience from experimenters.
The HRI community is working to develop interactive robots for a wide variety of pro-social tasks and ideals. As such, we naturally focus on the positive side of HRI, including how robots and humans may collaborate and the benefits of doing so. This workshop, in contrast, will focus on the dark side of HRI, with the goal of identifying, understanding, and guarding against the potential negative consequences of interactive robots. The primary objective of the workshop is to articulate and discuss the most pertinent ethical issues facing the HRI community and to develop a set of common community guidelines.
HRI researchers have explored how people behave toward technology agents, advancing the concept that people can attain "closeness" with technology itself in addition to a living social partner. Yet the topic of closeness with robots has not been fully explored or organized into a discrete area of study. This seems particularly important to the design of robots that are expressive, to the implementation of technologies that use new social signal processing or reciprocal social touch, and to the study of how people respond to robots. This half-day workshop is a forum to discuss the future of "closeness" with robots, conversational agents, autonomous vehicles, Internet of Things devices and other technologies that act as social partners---designs, applications, responses and societal concerns.
The Robots for Learning workshop series aims at advancing research topics related to the use of social robots in educational contexts. This year's half-day workshop follows on previous events at Human-Robot Interaction conferences focusing on efforts to design, develop, and test new robotic systems that help learners. This 5th edition of the workshop will deal in particular with the potential use of robots for adaptive learning. Over the past few years, inclusive education has been a key policy in a number of countries, aiming to provide equal chances and common ground to all. In this workshop, we aim to discuss strategies to design robotic systems able to adapt to learners' abilities, to provide assistance, and to demonstrate long-term learning effects.
With recent strong interest in flexible displays and wearable devices that are mechanically robust against deformation, sensors and actuators for human-machine interfaces need to be soft so that they can be embedded into flexible mechanisms. We introduce recent approaches to soft and flexible sensors and actuators, and discuss the open issues in these topics. An open discussion will take place on the future of these types of sensors and actuators for human-robot interaction systems.
Verified and validated test methods, being necessary to measure the performance of complex systems, are important tools for driving innovation, benchmarking and improving performance, and establishing trust in collaborative human-robot teams. This full-day workshop aims to explore the metrology necessary for repeatably and independently assessing the collaborative performance of robotic systems in real-world human-robot interaction (HRI) scenarios. This workshop aims to bridge the gaps between the theory and applications of HRI in industry, accelerating the adoption of cutting-edge technologies as the industry state of practice. The interest in collaborative HRI is evident in the current market as well as in standards efforts toward manufacturing, social, medical, and service robot solutions. Though these domains have been considered separate for many years, recent technological and scientific advancements show that, while their applications may differ, the underlying principles of HRI performance impact each identically. As such, this workshop seeks to identify test methods and metrics for the holistic assessment and assurance of collaborative HRI performance. The focus is on identifying the key performance indicators of these seemingly disparate sectors, and additionally on establishing a community based on the principles of transparency, repeatability, and trust in the assessment of collaborative HRI. The goal is to aid in the advancement of HRI technologies through the development of experimental scenarios, protocols, test methods, and metrics for the verification and validation of interaction solutions and interface designs.
Service robots with social intelligence are starting to be integrated into our everyday lives. These robots are intended to improve aspects of quality of life as well as efficiency. We are organizing an exciting workshop at HRI 2019 oriented towards sharing ideas amongst participants with diverse backgrounds, ranging from human-robot interaction design, social intelligence, decision making, and social psychology to robotic social skills. The purpose of this workshop is to explore how social robots can interact with humans socially and to facilitate the integration of social robots into our daily lives. This workshop focuses on three social aspects of human-robot interaction: (1) technical implementation of social robots and products, (2) form, function, and behavior, and (3) human behavior and expectations as a means to understand the social aspects of interacting with these robots and products.
The success of rising service robots will rely largely on their ability to persuade people to use their services. Simple scenarios in which a robot conveys information to a human could be enhanced given a deeper understanding of persuasion in the context of human-robot interaction. These robots can further increase their utility by moving around and being responsive to people. Robot furniture is an emerging area of social robotics in which the furniture itself acts as a minimal social robot. Such robots have already shown success in interacting via non-verbal behaviors; however, previous work has seldom considered the persuasive capability of their behavior.
This PhD project aims at investigating how a social robot can adapt its behaviours to group members in order to achieve more positive group dynamics, which we identify as group intelligence. This goal is supported by our previous work, which contains relevant data and insightful results for the understanding of group interactions between humans and robots. Finally, we examine and discuss our planned future work and its contributions to the field of Human-Robot Interaction (HRI).
There will always be occasions where robots require assistance from humans. Understanding what motivates people to help a robot, and what effect this interaction has on an individual will be essential in successfully integrating robots into our society. Emotions are important in motivating prosocial behavior between people, and therefore may also play a large role in human-robot interaction. This research explores the role of emotion in motivating people to help a robot and some of the ethical issues that arise as a result, with the ultimate aim of developing suitable methods for robots to interact with humans to acquire assistance.
Children have been documented extending prosocial ideals and moral worth towards robots. However, there is also emerging evidence that children will behave violently towards robots. Given expectations that robots will soon become common fixtures in classrooms, and that children can behave violently towards robots, it is important that we identify how others might appraise the victims (robots) and perpetrators (children) of these scenarios. It is critical that we understand whether children will think it is okay to behave immorally towards a robot, or if they will be deterred by these acts of abuse. Additionally, we must determine how this will influence children's learning decisions. The proposed study aims to address these questions by creating a novel paradigm where children are asked to rate robots and humans who have been involved in a transgression, where the human behaves anti-socially towards the robot. This study will also aim to understand how these appraisals will influence children's learning decisions in a forced-choice, selective learning task.
Robots deployed in human environments will inevitably encounter unmodeled scenarios which are likely to result in execution failures. To address this issue, we would like to allow co-present naive users to correct and improve the robot's behavior as these edge cases are encountered over time.
As robots become more capable, they will become increasingly useful in a widening variety of contexts and applications. Non-roboticists in these diverse contexts will need to interact with their new robotic colleagues to facilitate productive human-robot teaming and comfortable coexistence in social environments. Natural language provides a medium for these interactions that will allow direct and fluid communication between robots and nearly all humans, without requiring specialized protocols or hardware. Indeed, many researchers have been actively investigating the problems of natural language understanding and generation in robots for some time.
The objective of this study is to investigate the effect of socio-relational context and a robot's tactility on the sense of social presence and user satisfaction. We conducted a 2 (socio-relational context: intimate remote sender vs. non-intimate remote sender) x 2 (tactility: anthropomorphic vs. non-anthropomorphic) within-participants experiment (N=24). Participants felt a stronger sense of a remote sender's presence when the remote sender was an intimate person than a non-intimate person, and felt a greater sense of the remote sender's presence when the robot had anthropomorphic tactility than when it had non-anthropomorphic tactility. In addition, participants preferred a telepresence robot with anthropomorphic tactility when interacting with an intimate person, whereas a telepresence robot with non-anthropomorphic tactility was preferred when interacting with a non-intimate person in robot-mediated communication.
Gaze is an intuitive and effective human-robot interaction (HRI) modality, as it communicates the attention of a robot. Implementations of different gaze mechanisms often include plausible human gaze timings and are evaluated in isolated single-task settings. However, humanoid social robots will be deployed in complex social situations. During a conversation, a robot might therefore need to avert its gaze from the human to demonstrate awareness of other events. Human-based gaze timings could be too restrictive for this task. Participants' levels of comfort, received attention, and behavioral engagement will be measured during a systematic variation of the gaze focus ratio in a conversation. The robot will alternately focus on the interviewer and on objects at the back of the room.
The findings will show whether it is possible to design robot interactions that do not adhere to predetermined human-based parameters, and will compare interaction-quality measures across the varied gaze focus splits to actual human-human interaction (HHI) timing distributions.
Autonomous manipulation has the potential to improve the quality of life of many by assisting in routine household tasks such as cooking, cleaning, and organizing. However, for safe, dependable, and effective operation alongside humans, both the robot and the human must have an accurate and reliable assessment of the robot's proficiency at completing the relevant tasks. Such an assessment helps to ensure that the robot does not engage in tasks that it cannot handle and instead engages in tasks that are well-aligned with the robot's abilities. This proposal thus investigates how a robot can actively assess both its proficiency and its confidence in that assessment through appropriate measures of uncertainty that can be efficiently and effectively communicated to a human. The experiments examine how a user's trust and subsequent use of a robot vary as a result of the robot's self-assessment of proficiency.
We live in an aging society; social-physical human-robot interaction has the potential to keep our older adults healthy by motivating them to exercise. After summarizing prior work, this paper proposes a tool that can be used to design exercise and therapy interactions to be performed by an upper-body humanoid robot. The interaction design tool comprises a teleoperation system that transmits the operator's arm motions, head motions, and facial expressions, along with an interface to monitor and assess the motion of the user interacting with the robot. We plan to use this platform to create dynamic and intuitive exercise interactions.
The development of human-robot systems able to leverage the strengths of both humans and their robotic counterparts has been greatly sought after because of the foreseen, broad-ranging impact across industry and research. We believe the true potential of these systems cannot be reached unless the robot is able to act with a high level of autonomy, reducing the burden of manual tasking or teleoperation. To achieve this level of autonomy, robots must be able to work fluidly with their human partners, inferring their needs without explicit commands. This inference requires the robot to be able to detect and classify the heterogeneity of its partners. We propose a framework for learning from heterogeneous demonstration based upon Bayesian inference and evaluate a suite of approaches on a real-world dataset of gameplay from StarCraft II. This evaluation provides evidence that our Bayesian approach can outperform conventional methods by up to 12.8%.
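The abstract above does not detail its Bayesian framework, but the core idea of classifying heterogeneous demonstrators can be illustrated with a minimal sketch. Everything below is hypothetical and for illustration only: the two demonstrator "types", the three-action alphabet, and the likelihood table are invented, not taken from the paper. Observed actions sequentially update a posterior belief over the latent type via Bayes' rule:

```python
import numpy as np

# Hypothetical latent demonstrator types and action alphabet.
TYPES = ["aggressive", "defensive"]
# P(action | type): rows = types, columns = actions [attack, expand, defend].
# These numbers are placeholders for a likelihood model learned from data.
LIKELIHOOD = np.array([
    [0.6, 0.3, 0.1],   # aggressive demonstrator
    [0.1, 0.3, 0.6],   # defensive demonstrator
])

def posterior_over_types(actions, prior=None):
    """Sequential Bayesian update of P(type | observed actions)."""
    belief = (np.ones(len(TYPES)) / len(TYPES) if prior is None
              else np.asarray(prior, dtype=float))
    for a in actions:
        belief = belief * LIKELIHOOD[:, a]  # multiply in the likelihood
        belief = belief / belief.sum()      # renormalize to a distribution
    return belief

# A demonstrator who mostly attacks is classified as "aggressive".
belief = posterior_over_types([0, 0, 2, 0])
print(dict(zip(TYPES, belief.round(3))))
```

Once the posterior concentrates on a type, a robot could condition its assistance policy on the inferred type; the paper's actual model over StarCraft II gameplay is of course far richer than this two-type toy.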
Robots must exercise socially appropriate behavior when interacting with humans. How can we assist interaction designers to embed socially appropriate and avoid socially inappropriate behavior within human-robot interactions? We propose a multi-faceted interaction-design approach that intersects human-robot interaction and formal methods to help us achieve this goal. At the lowest level, designers create interactions from scratch and receive feedback from formal verification, while higher levels involve automated synthesis and repair of designs. In this extended abstract, we discuss past, present, and future work within each level of our design approach.
We conducted a study on whether dogs (Canis familiaris) perceive robots as agents, using the classic pointing paradigm in animal cognition research. While few studies to date have explored the pointing paradigm with robots, an initial study did not suggest dogs understood non-humanoid robot pointing. In this study, we tested 20 dogs with the humanoid robot Nao. Our results did not suggest that dogs understand humanoid robot pointing. We are currently working on revising the design and will conduct more experiments.
Priming is the influence of external stimuli on a person's behavior and thoughts, where the effect is related to some quality of those stimuli. In this work, we explore three methods of priming teleoperators' expectations about robot capability and investigate how this may impact driving behavior and perceptions of the robot. We primed impressions of robot ability through the stiffness of the robot controller alone, through stiffness accompanied by verbal descriptions of the changes, and through simply describing the robot's abilities on paper and verbally. We found that all priming methods affected perceptions of the robot, including its speed, weight, and overall safety. Further, we confirmed that some priming methods lowered operator collisions by over 40%. Thus, interface and product designers should consider priming as a tool to improve operator performance and perceptions of the robot.
Interactions with social robots in public and private spaces are becoming more and more common and varied. As this trend continues, it is important to understand how a robot's embodiment influences its ability to calibrate trust and comfort with its users and to behave in accordance with social norms. This is especially true when one social intelligence embodies multiple physical robots (re-embodiment). We have conducted two studies---one quantitative and one qualitative---which shed light on the way robots should be embodied and re-embodied by intelligences during different types of social interactions. This paper outlines our previous work on elucidating the role of embodiment in social interactions and experimenting with re-embodiment as a design paradigm, and it describes the directions in which we plan to take this research in the near future.
Imagine a learning scenario between two humans: a teacher demonstrating how to play a new musical instrument, or a craftsman teaching a new skill like pottery or knitting to a novice. Even though learning a skill has a learning curve to get the nuances of the technique right, some basic social principles are followed between the teacher and the student to make the learning process eventually succeed. There are several assumptions or social priors in this communication for teaching: mutual eye contact to draw attention to instructions, following the gaze of the teacher to understand the skill, the teacher following the student's gaze during imitation to give feedback, the teacher demonstrating by pointing towards something she is going to approach or manipulate, and verbal interruptions or corrections during the learning process. In prior research, verbal and non-verbal social cues such as eye gaze and gestures have been shown to make human-human interactions seamless and to augment verbal, collaborative behavior. They serve as indicators of engagement, interest, and attention when people interact face-to-face with one another.
A commonly used argument for using robots in interventions for autistic children is that robots can be very predictable. Even though robot behaviour can be designed to be perceived as predictable, a degree of perceived unpredictability is unavoidable and may sometimes even be desirable. To balance the robot's predictability for autistic children, we will need to gain a better understanding of what factors influence the perceived (un)predictability of the robot, how those factors can be taken into account in the design of the interaction, and how they influence the autistic child-robot interaction. In our work, we look at a specific type of predictability and define it as "the ability to quickly and accurately predict the robot's future actions". Initial results show that seeing the cause of a robot's responsive actions influences the extent to which the robot is perceived as unpredictable, as well as its perceived competence. In future work, we will investigate the effects of the variability of the robot's behaviour on the perceived predictability of a robot for both typically developing and autistic individuals.
Gathering the most informative data from humans without overloading them remains an active research area in AI, and is closely coupled with the problems of determining how and when information should be communicated to others. Current decision support systems (DSS) are still overly simple and static, and cannot adapt to the changing environments in which we expect to deploy modern systems. They are intrinsically limited in their ability to explain their rationale rather than merely listing their future behaviors, limiting a human's understanding of the system. Most probabilistic assessments of a task are conveyed after the task/skill is attempted rather than before, which limits failure recovery and danger avoidance mechanisms. Existing work on predicting failures relies on sensors to accurately detect explicitly annotated and learned failure modes. As such, important non-obvious pieces of information for assessing appropriate trust and/or course-of-action (COA) evaluation in collaborative scenarios can go overlooked, while irrelevant information may instead be provided, increasing clutter and mental workload. Understanding how AI models arrive at specific decisions is a key principle of trust. Therefore, it is critically important to develop new strategies for anticipating, communicating, and explaining justifications and rationale for AI-driven behaviors via contextually appropriate semantics.
Social influence refers to an individual's attitudes and/or behaviours being influenced by others, whether implicitly or explicitly, such that persuasion and compliance gaining are instances of social influence. In human-human interaction (HHI), the desire to understand compliance and maximise social influence for persuasion has led to the development of theory and resulting strategies one can use in an attempt to leverage social influence, e.g. Cialdini's 'Weapons of Influence'. Whilst a number of social human-robot interaction (HRI) studies have investigated the impact of different robot behaviours on compliance gaining and persuasion, established strategies for maximising this have yet to emerge, and it is unclear to what extent theories and strategies from HHI might apply.
We have designed "Pomelo", an interactive robot that teaches children basic algorithmic skills and enhances classroom collaboration through games. Pomelo looks like a friendly dog with a screen displaying its eyes and cooperates with the students through vision and speech. Like "Robovie", Pomelo becomes a part of the class instead of merely being a second teacher by interacting with children in the same way children interact with each other. Ultimately, Pomelo will encourage kids to use technology as an interactive learning tool instead of a form of addictive entertainment while creating a more cooperative and social classroom environment.
Fast-paced contemporary life, full of planned interactions, often makes people miss out on wonderful moments. We present BubbleBot, a speculative robot designed to support the serendipity of interactions in public space. Following observations in public spaces and embodied design workshops, we designed BubbleBot to be a peripheral public-space robot that bursts bubbles at passersby to invite serendipitous interactions, creating magical moments among people through minimal, peripheral social interaction. With this project, we aim to generate a conversation about the future roles and interaction paradigms of robots in public space.
Lumigami is an interactive lamp that the user can adapt to create different room atmospheres by changing light conditions according to their needs. It creates responsive communication between human and machine by letting the user manipulate the mechanical dimmer with intuitive hand actions, opening up different interactive experiences for light spaces. It expands the style of modern geometric lamps by incorporating traditional origami and interactive design, exploring gestures as a new way to relate to everyday objects.
Inspired by prior work with robots that physically display positive emotion, we were interested to see how people might interact with a robot capable of communicating cues of negative affect such as anger. Building on this prior work, we have prototyped an anti-social, zoomorphic robot equipped with a spike mechanism to nonverbally communicate anger. The robot's embodiment involves a simple dome-like morphology with a ring of inflatable spikes wrapped around its circumference. Ultrasonic sensors trigger the robot's antisocial cuing (e.g., "spiking" when a person comes too close). To evaluate people's perceptions of the robot and the impact of the spike mechanism on their behavior, we plan to deploy the robot in social settings where it would be inappropriate for a person to approach (e.g., in front of a door with a "do not disturb" sign). We expect that exploration of robot antisociality, in addition to prosociality, will help inform the design of more socially complex human-robot interactions.
We designed "AVEC", a wheelchair for older caregivers that can be intuitively controlled with minimal force and that enhances communication with passengers. We set three design objectives based on objective data and hands-on experience. To achieve them, we developed three prototypes following an iterative design cycle. After verifying usability and testing various types of structure, the final prototype was designed to enter our daily lives through intuitive power assistance, enhanced communication between users, and user-centered design.
In the future, human-robot interaction should be enabled by a compact, human-centered, and ergonomic wearable device that can merge human and machine seamlessly by constantly identifying each other's intentions. In this paper, we showcase the use of an ergonomic and lightweight wearable device that can identify a human's eye and facial gestures from physiological signal measurements. Since human intentions are usually coupled with eye movements and facial expressions, proper design of interactions using these gestures lets people interact naturally with robots or smart home objects. Combined with computer vision object recognition algorithms, this allows people to use very simple and straightforward communication strategies to operate a telepresence robot and control smart home objects remotely, completely hands-free. People can wear a VR head-mounted display, see through the robot's eyes (the remote camera attached to the robot), and interact with smart home devices intuitively through simple facial gestures or blinks of the eyes. This is tremendously beneficial as an assistive tool for people with motor impairments. People without disabilities can also free their hands for other tasks while operating smart home devices, as a multimodal control strategy.