Intelligent systems, especially those with an embodied construct, are becoming pervasive in our society. From chatbots to rehabilitation robotics, from shopping agents to robot tutors, people are adopting these systems into their daily activities. Associated with this increased acceptance, however, is concern about the ethical ramifications as we become more dependent on these devices [1]. Studies, including our own, suggest that people tend to trust, and in some cases overtrust, the decision-making capabilities of these systems [2]. For high-risk activities, such as in healthcare, where human judgment should at times still take priority, this propensity to overtrust becomes troubling [3]. Methods should thus be designed to examine when overtrust can occur, to model the behavior for future scenarios, and, if possible, to introduce system behaviors that mitigate it. In this talk, we will discuss a number of human-robot interaction studies in which we examined this phenomenon of overtrust, including healthcare-related scenarios with vulnerable populations, specifically children with disabilities.
To structure the broad range of knowledge contributions, we develop a taxonomy that categorizes HRI failure types and their impact on trust. We further identify research gaps in order to support fellow researchers in the development of trustworthy robots. Trust repair in HRI has only recently received greater attention, and we propose a taxonomy of potential trust violations and suitable repair strategies to support researchers during the development of interaction scenarios. The taxonomy distinguishes four failure types: Design, System, Expectation, and User failures, and outlines potential mitigation strategies. Based on these failures, strategies for autonomous failure detection and repair are presented, employing explanation, verification, and validation techniques. Finally, a research agenda for HRI is outlined, discussing identified gaps in the relationship between failures and human-robot trust.
The attribution of human-like characteristics to humanoid robots has become a common practice in Human-Robot Interaction by designers and users alike. Robot gendering, the attribution of gender to a robotic platform via voice, name, physique, or other features, is a prevalent technique used to increase aspects of user acceptance of robots. One important factor relating to acceptance is user trust. As robots continue to integrate themselves into common societal roles, it will be critical to evaluate user trust in the robot's ability to perform its job. This paper examines the relationship among occupational gender roles, user trust, and gendered design features of humanoid robots. Results from the study indicate that there was no significant difference in the perception of trust in the robot's competency when considering the gender of the robot. This expands on findings from prior efforts suggesting that performance-based factors have a larger influence on user trust than the robot's gender characteristics. In fact, our study suggests that perceived occupational competency is a better predictor of human trust than robot gender or participant gender. As such, gendering in robot design should be considered critically by designers in the context of the application. Such precautions would reduce the potential for robotic technologies to perpetuate societal gender stereotypes.
As autonomous robots move towards ubiquity, the need for robots to make trustworthy decisions under risk becomes increasingly significant, both to aid acceptance and to fully utilise their autonomous capabilities. We propose that incorporating a human approach to risk assessment into a robot's decision-making process will increase user trust. This work investigates four robotic approaches to risk and, through a user study, explores the levels of trust placed in each. These approaches are: risk averse, risk seeking, risk neutral, and a human approach to risk. Risk is artificially induced through performance-based compensation, in line with previous studies. The study was conducted in a virtual nuclear environment created using the Unity games engine. Forty participants were asked to complete a robot supervision task, in which they observed a robot making risk-based decisions and were able to question the robot, question it further, and ultimately accept or alter its decision. It is shown that a risk-seeking robot is significantly less trusted than a risk-averse robot, a risk-neutral robot, or a robot utilising a human approach to risk. No significant difference was found between the levels of trust placed in the risk-averse, risk-neutral, and human approaches to risk. It is also found that the extent to which participants question a robot's decisions does not form an accurate measure of trust. The results suggest that when designing a robot that must make risk-based decisions during teleoperation in a hazardous environment, an engineer should avoid a risk-seeking robot. However, that same engineer may choose whichever of the remaining risk profiles best suits the implementation, with the knowledge that trust in the system is unlikely to be significantly affected.
This paper examines how people's trust in and dependence on robot teammates providing decision support vary as a function of different attributes of the robot, such as perceived anthropomorphism, type of support provided, and physical presence. We conduct a mixed-design user study with multiple robots to investigate trust, inappropriate reliance, and compliance measures in the context of a time-constrained game. We also examine how human accountability mitigates errors due to over-compliance in the context of human-robot interaction (HRI). This study is novel in that it examines multiple attributes at once, enabling multi-way comparisons of their effects on trust in and compliance with the agent. Results from the 4x4x2x2 study show that the behavior and anthropomorphism of the agent are the most significant factors in predicting trust in and compliance with the robot. Furthermore, adding a coalition-building preface, in which the agent provides context for why it might make errors while giving advice, leads to an increase in trust for specific agent behaviors.
In order to collaborate safely and efficiently, robots need to anticipate how their human partners will behave. Some of today's robots model humans as if they were also robots, and assume users are always optimal. Other robots account for human limitations, and relax this assumption so that the human is noisily rational. Both of these models make sense when the human receives deterministic rewards: i.e., gaining either $100 or $130 with certainty. But in real-world scenarios, rewards are rarely deterministic. Instead, we must make choices subject to risk and uncertainty, and in these settings, humans exhibit a cognitive bias towards suboptimal behavior. For example, when deciding between gaining $100 with certainty or $130 only 80% of the time, people tend to make the risk-averse choice, even though it leads to a lower expected gain! In this paper, we adopt a well-known Risk-Aware human model from behavioral economics called Cumulative Prospect Theory and enable robots to leverage this model during human-robot interaction (HRI). In our user studies, we offer supporting evidence that the Risk-Aware model more accurately predicts suboptimal human behavior. We find that this increased modeling accuracy results in safer and more efficient human-robot collaboration. Overall, we extend existing rational human models so that collaborative robots can anticipate and plan around suboptimal human behavior during HRI.
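To make the bias described above concrete, below is a minimal Python sketch of a Cumulative Prospect Theory choice rule for gains, assuming the standard Tversky and Kahneman (1992) parameterization; it is an illustrative reconstruction, not the exact model or parameter values used in this work.

def value(x, alpha=0.88):
    # Concave value function for gains (losses omitted for brevity).
    return x ** alpha

def weight(p, gamma=0.61):
    # Inverse-S probability weighting: small probabilities are overweighted,
    # moderate-to-large probabilities are underweighted.
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

def cpt_utility(outcome, prob):
    # Subjective utility of a single-outcome gamble: gain `outcome` with probability `prob`.
    return weight(prob) * value(outcome)

# The example from the abstract: $100 for sure vs. $130 with 80% probability.
sure = cpt_utility(100, 1.0)    # ~57.5
gamble = cpt_utility(130, 0.8)  # ~44.1, despite the higher expected value (104)
print(f"sure thing: {sure:.1f}, risky gamble: {gamble:.1f}")
# CPT predicts the risk-averse choice, matching the human bias described above.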
Humans and robots will increasingly collaborate in domestic environments, which will cause users to encounter more failures in interactions. Robots should be able to infer conversational failures by detecting human users' behavioural and social signals. In this paper, we study and analyse these behavioural cues in response to robot conversational failures. Using a guided task corpus, in which robot embodiment and time pressure are manipulated, we ask human annotators to estimate whether user affective states differ during various types of robot failures. We also train a random forest classifier to detect whether a robot failure has occurred and compare results to human annotator benchmarks. Our findings show that human-like robots augment users' reactions to failures, as reflected in users' visual attention, in comparison to non-human-like smart-speaker embodiments. The results further suggest that speech behaviours are used more in responses to failures when non-human-like designs are present. This is particularly important for failure detection mechanisms, which may need to take the robot's physical design into account.
Interactive learning technologies, such as robots, increasingly find their way into schools. However, more research is needed on how children might work with such systems in the future. This paper presents the unsupervised, four-month deployment of a Robot-Extended Computer Assisted Learning (RECAL) system with 61 children working in their own classroom. Using automatically collected quantitative data, we discuss how their usage patterns and self-regulated learning processes developed throughout the study.
Numerous studies in social psychology have shown that familiarization across repeated interactions improves people's perception of the other. Whether and how these findings relate to human-robot interaction (HRI) is not well understood, even though such knowledge is crucial when pursuing long-term interactions. In our work, we investigate the persistence of first impressions by asking 49 participants to play a geography game with a robot. We measure how their perception of the robot changes over three sessions with three to ten days of zero exposure in between. Our results show that different perceptual dimensions stabilize within different time frames, with the robot's competence being the fastest to stabilize and perceived threat fluctuating the most over time. We also found evidence that perceptual differences between robots with varying levels of humanlikeness persist across repeated interactions. This study has important implications for HRI design as it sheds new light on the influence of robots' embodiment and interaction abilities. Moreover, it also contributes to HRI theory by presenting novel findings on the uncanny valley and robot perception in general.
This paper suggests that humans value robotic systems as signals: costly, visible commitments that can secure access to preferred resources. This contrasts with HRI research, design, engineering, and deployment, which have focused on robots' instrumental value, namely how designing and interacting with them may affect productivity. Drawing on a multiyear ethnography of a "failed" robotic telepresence deployment in a teaching hospital, this paper shows that robots' signaling value can significantly outweigh, and even contravene, any practical utility that they may provide through use. This analysis further suggests that, unlike nontechnological organizational signals, robots' signaling value is highly contingent on the observability of their use. Considering robots' signaling value in complex social systems such as organizations promises improved robotic systems development and deployment techniques and improved prediction regarding human-robot interaction.
Public perceptions of Robotics and Artificial Intelligence (RAI) are important in the acceptance, uptake, government regulation, and research funding of this technology. Recent research has shown that the public's understanding of RAI can be negative or inaccurate. We believe effective public engagement can help ensure that public opinion is better informed. In this paper, we describe the first iteration of a high-throughput, in-person public engagement activity. We describe the use of a light-touch, quiz-format survey instrument to integrate in-the-wild research participation into the engagement, allowing us to probe both the effectiveness of our engagement strategy and public perceptions of the future roles of robots and humans working in dangerous settings, such as the off-shore energy sector. We critique our methods and share interesting results on generational differences in the public's view of the future of Robotics and AI in hazardous environments. These findings include that older people's views about the future of robots in hazardous environments were not swayed by exposure to our exhibit, while the views of younger people were affected by it, leading us to consider carefully how to engage with and inform older people more effectively in the future.
The generalizability of empirical research depends on the reproduction of findings across settings and populations. Consequently, generalization demands resources beyond those typically available to any one laboratory. With collective interest in the joint Simon effect (JSE), a phenomenon suggesting that people work more effectively with humanlike (as opposed to mechanomorphic) robots, we pursued a multi-institutional research cooperation between robotics researchers, social scientists, and software engineers. To evaluate the robustness of the JSE in dyadic human-robot interactions, we constructed an experimental infrastructure for exact, lab-independent reproduction of robot behavior. Deployment of our infrastructure across three institutions with distinct research orientations (well-resourced versus resource-constrained) provides an initial demonstration of the success of our approach and the degree to which it can alleviate technical barriers to HRI reproducibility. Moreover, with the three deployments situated in culturally distinct contexts (Germany, the U.S. Midwest, and the Mexico-U.S. border), observation of a JSE at each site provides evidence of its generalizability across settings and populations.
In the Republic of Kazakhstan, the transition from the Cyrillic to the Latin alphabet raises the challenge of training an entire population to write the new script. This paper presents the CoWriting Kazakh system, an extension of the existing CoWriter system, which aims to implement an autonomous social robot that assists children in the transition from the old Cyrillic alphabet to the new Latin alphabet. To investigate which learning strategy yields better learning gains, we conducted an experiment with 67 children, aged 8-11 years, who interacted with a robot in a CoWriting Kazakh learning scenario. Participants were asked to teach a humanoid NAO robot how to write Kazakh words using one of the scripts, Latin or Cyrillic. We hypothesized that a scenario in which the child is asked to mentally convert the word to Latin would be more effective than having the robot perform the conversion itself. Results show that the CoWriter was successfully applied to this new script-switching task. The findings also suggest interesting gender differences in the preferred method of learning with the robot.
JESSIE is a robotic system that enables novice programmers to program social robots by expressing high-level specifications. We employ control synthesis with a tangible front-end to allow users to define complex behavior for which we automatically generate control code. We demonstrate JESSIE in the context of enabling clinicians to create personalized treatments for people with mild cognitive impairment (MCI) on a Kuri robot, in little time and without error. We evaluated JESSIE with neuropsychologists who reported high usability and learnability. They gave suggestions for improvement, including increased support for personalization, multi-party programming, collaborative goal setting, and re-tasking robot role post-deployment, which each raise technical and sociotechnical issues in HRI. We exhibit JESSIE's reproducibility by replicating a clinician-created program on a TurtleBot 2. As an open-source means of accessing control synthesis, JESSIE supports reproducibility, scalability, and accessibility of personalized robots for HRI.
We previously introduced a responsive joint attention system that uses multimodal information from users engaged in a spatial reasoning task with a robot and communicates joint attention via the robot's gaze behavior. An initial evaluation of our system with adults showed it to improve users' perceptions of the robot's social presence. To investigate the repeatability of our prior findings across settings and populations, here we conducted two further studies employing the same gaze system with the same robot and task but in different contexts: evaluation of the system with external observers and evaluation with children. The external observer study suggests that third-person perspectives over videos of gaze manipulations can be used either as a manipulation check before committing to costly real-time experiments or to further establish previous findings. However, the replication of our original adults study with children in school did not confirm the effectiveness of our gaze manipulation, suggesting that different interaction contexts can affect the generalizability of results in human-robot interaction gaze studies.
A growing number of studies use a "ghost-driver" vehicle, driven by a person in a car seat costume, to simulate an autonomous vehicle. Using a hidden-driver vehicle in a field study in the Netherlands, Study 1 (N = 130) confirmed that the ghost-driver methodology is valid in Europe and that European pedestrians change their behavior when encountering a hidden-driver vehicle. As an important extension to past research, we find that pedestrian group size is associated with behavior: groups look longer than singletons when encountering an autonomous vehicle, but for less time than singletons when encountering a normal vehicle. Study 2 (N = 101) adapted and extended the hidden-driver method to test whether it is believable as an online video stimulus and whether car characteristics and participant feelings are related to the beliefs and behavior of pedestrians who see hidden-driver vehicles. As expected, belief rates were lower for hidden-driver vehicles seen in videos than in a field study. Importantly, we found that noticing no driver was the only significant predictor of belief in car autonomy, which reinforces prior justification for the use of the ghost-driver method. Our contributions are a replication of the hidden-driver method in Europe and comparisons with past US and Mexico data; an extension and evaluation of the ghost-driver method in video form; evidence of the necessity of the hidden driver in creating the illusion of vehicle autonomy; and an extended analysis of how pedestrian group size and feelings relate to pedestrian behavior when encountering a hidden-driver vehicle.
We developed a novel gamified system for post-stroke long-term rehabilitation, using the humanoid robot Pepper (SoftBank, Aldebaran). Here, we present a participatory-design study with insights from both expert clinicians and stroke patients who underwent a long-term intervention with the robot. We first present the results of a qualitative study with expert clinicians (n=12) on the compatibility of this system with the needs of post-stroke patients, and then the preliminary results of a long-term intervention study with post-stroke participants (n=4) in a rehabilitation facility. Both the clinicians and the patients found the robot and the gamified system engaging, motivating, and suited to the needs of upper-limb rehabilitation. The clinicians gave specific recommendations that may be applicable to a wide range of technologies for post-stroke rehabilitation.
Several studies have been reported on the use of social robots for dementia care. These robots have been used for diverse tasks such as companionship, exercise coaching, and daily life assistance. However, most of these studies have assessed the impact on participants only at the time of the interaction rather than its medium- or long-term effects. In this work, we report on a nine-week study conducted in a nursing home in which an autonomous social robot, called Eva, acted as the facilitator of cognitive stimulation therapy (CST). During the study, eight persons with dementia interacted with the robot in a group session that included elements of music therapy, reminiscence, cognitive games, and relaxation. Using the Neuropsychiatric Inventory - Nursing Home version (NPI-NH), we analyzed the impact of the therapy guided by the robot. The results show a statistically significant decrease in the total NPI-NH score. In addition, three dementia-related symptoms, delusions, agitation/aggression, and euphoria/exaltation, showed a statistically significant decrease after the intervention. Furthermore, a qualitative analysis of interviews conducted with caregivers shows that all participants exhibited positive short-term effects after the sessions and provides insights into why some changes in behavior persisted beyond the therapy sessions. Our results provide evidence that a social robot could play a role in improving the quality of life of persons with dementia.
Autonomy and the ability to maintain social activities can be challenging for people with disabilities experiencing reduced mobility. In the case of disabilities that impact mobility, power wheelchairs can help such people retain or regain autonomy. Nonetheless, driving a power wheelchair is a complex task that requires a combination of cognitive, visual, and visuo-spatial abilities. In practice, people need to pass prior ability tests and driving training before being prescribed a power wheelchair by their therapist. Still, conventional training in occupational therapy can be insufficient for some people with severe cognitive and/or visuo-spatial impairments. As such, these people are often prevented from obtaining a power wheelchair prescription from their therapist due to safety concerns. In this context, driving simulators might be efficient and promising tools to provide alternative, adaptive, flexible, and safe training. In previous work, we proposed a Virtual Reality (VR) driving simulator integrating vestibular feedback to simulate wheelchair motion sensations. The performance and acceptability of a VR simulator rely on satisfying user Quality of Experience (QoE). Therefore, our simulator is designed to give the user a high Sense of Presence (SoP) and low cybersickness. This paper presents a pilot study assessing the impact of the vestibular feedback provided on user QoE. Participants were asked to perform a driving task in the simulator under two conditions: with and without vestibular feedback. User QoE is assessed through subjective questionnaires measuring user SoP and cybersickness. The results show that activating vestibular feedback increases SoP and decreases cybersickness. This study constitutes a mandatory step before clinical trials and, as such, only enrolled people without disabilities.
A robot-assisted feeding system can potentially help a user with upper-body mobility impairments eat independently. However, autonomous assistance in the real world is challenging because of varying user preferences, impairment constraints, and the possibility of errors in uncertain and unstructured environments. An autonomous robot-assisted feeding system needs to decide the appropriate strategy for acquiring a bite of hard-to-model deformable food items, the right time to bring the bite close to the mouth, and the appropriate strategy for transferring the bite easily. Our key insight is that such a system should be designed based on a user's preferences about these various challenging aspects of the task. In this work, we explore user preferences for different modes of autonomy given perceived error risks, and we also analyze the effect of input modalities on technology acceptance. We found that more autonomy is not always better, as participants did not prefer a robot with partial autonomy over one with low autonomy. In addition, participants' user-interface preference changes from voice control during individual dining to web-based control during social dining. Finally, we found differences in average ratings when grouping participants by mobility limitations (lower vs. higher), suggesting that ratings from participants with lower mobility limitations are correlated with higher expectations of robot performance.
We reveal the process by which children come to engage in serious robot abuse, such as kicking and punching. In Study 1, we established a process model of robot abuse using a qualitative analysis method specialized for time-series data, the Trajectory Equifinality Model (TEM). With the TEM method, we analyzed interactions from nine children who committed serious robot abuse, from which we developed a multi-stage model: the abuse escalation model. The model has four stages: approach, mild abuse, physical abuse, and escalation. For each stage, we identified social guides (SGs), influencing events that fuel the stage. In Study 2, we conducted a quantitative analysis to examine the effect of these SGs. We analyzed 12 hours of data covering 522 children who visited the area near the robot, coded their behaviors, and statistically tested whether the presence of each SG promoted the corresponding stage. Our analysis confirmed correlations between four SGs and children's behaviors: the presence of other children was related to a new child approaching the robot (SG1); mild abuse by another child was related to a child engaging in mild abuse (SG2); physical abuse by another child was related to a child engaging in physical abuse (SG3); and encouragement from others was related to a child escalating the abuse (SG5).
As autonomous vehicles (AVs) become a reality on public roads, researchers and designers are beginning to see unexpected reactions from the public, ranging from curiosity to vandalism. These behaviors are concerning, as AV platforms will need to know how to deal with people behaving unexpectedly or aggressively. We call this griefing of AVs, adopting the term from harassment in online gaming. We discuss several examples of griefing observed in on-road field studies using a Wizard-of-Oz driverless car. While Uber and Waymo have anecdotally mentioned vandalism towards AVs, we believe this to be the first publicly available video of AV griefing, ranging from playful to aggressive. To stimulate discussion, we propose speculative design principles to address griefing.
Inspired by the benefits of human prosocial behavior, we explore whether prosocial behavior can be extended to a Human-Robot Interaction (HRI) context. More specifically, we study whether robots can induce prosocial behavior in humans through a 1x2 between-subjects user study (N=30) in which a confederate abused a robot. Through this study, we investigated whether the emotional reactions of a group of bystander robots could motivate a human to intervene in response to robot abuse. Our results show that participants were more likely to prosocially intervene when the bystander robots expressed sadness in response to the abuse as opposed to when they ignored these events, despite participants reporting similar perceptions of robot mistreatment and similar levels of empathy for the abused robot. Our findings demonstrate possible effects of group social influence through emotional cues by robots in human-robot interaction. They also reveal a need for further research on human prosocial behavior within HRI.
Community, craft, and the vernacular in artificially intelligent systems takes the position that everyone participating in society is an expert in their experiences within the community infrastructures that inform the makeup of robotic entities. Though we may not be familiar with the jargon used in specialized professional contexts, we share the vernacular of who we are as people and communities and the intimate sense that we are being learned. We understand that our data and collaboration are valuable, and our ability to successfully cooperate with the robotic systems proliferating around us is well served by the creation of qualitatively informed systems that understand and perhaps even share the aims and values of the humans they work with. Using as a case in point her art practice, which interrogates a humanoid robot and seeks to create culturally specific voice-interactive entities, Dinkins examines how interactions between humans and robots are reshaping human-robot and human-human relationships and interactions. She ponders these ideas through the lens of race, gender, and aging. She argues that communities on the margins of tech production, code, and the institutions creating the future must work to upend, circumvent, or reinvent the algorithmic systems, including robotics, that increasingly control the world and maintain us.
Social robots and autonomous social agents are becoming more ingrained in our everyday lives. Interactive agents from Siri to Anki's Cozmo robot include the ability to tell jokes to engage users. This ability will grow in importance as in-home social agents take on more intimate roles, so it is important to gain a greater understanding of how robots can best use humor. Stand-up comedy provides a naturally structured experimental context for initial studies of robot humor. In this preliminary work, we aimed to compare audience responses to a robotic stand-up comedian over multiple performances that varied robot timing and adaptivity. Our first study of 22 performances in the wild showed that a robot with good timing was significantly funnier. A second study of 10 performances found that an adaptive performance was not necessarily funnier, although adaptations almost always improved audience perception of individual jokes. This research provides key clues as to how social robots can best engage people with humor.
This paper describes our efforts to explore the design space of social interactions for a robot portrait photographer. Our human-centered design process involved professional and amateur photographers to better understand the social dimensions of subject-photographer interactions. This exploration then guided our design of a robot photographer that employs humor to elicit spontaneous smiles during photography events. In a laboratory evaluation of our robot prototype, we found that the majority of subjects considered the robot's humor comical and appreciated it. The robot elicited more spontaneous smiles when it delivered humorous content to its subjects than when it was not humorous. Our findings provide insights for the design of future social robot photographers.
This paper presents artistic work that comments on the exploitation of feminine gender performance in technology, from the point of view of a particular artist, enabled by a wearable robotic device that creates an onstage cyborg character. The piece, entitled "Babyface", was created in residence in a robotics lab and has catalyzed the development of a motion-activated wearable robot. This paper reports on the creation of the piece and its accompanying device, along with initial responses that have been shared informally. Iterations of multiple flexible attachment structures, a clear plastic dress that references the hyperbolic representation of the female sex, and the electronic subsystems are presented alongside discussion of the somatic and choreographic investigations that accompanied prototype development. These fabricated elements, together with the actions of the performer and a soundscape quoting statements made by real "female" robots, create an otherworldly, sad cyborg character that causes viewers to question their assumptions about, and the pressures placed on, the female ideal. This work is an important first step in the development of a wearable robot with an embodied connection to the performer and, eventually, audience members.
The application of social robots has recently been explored in various types of educational settings, including music learning. Earlier research presented evidence that the mere presence of a robot can influence a person's task performance, confirming social facilitation theory and findings in human-robot interaction. Consistent with evaluation apprehension theory, earlier studies also showed that, beyond a person's presence, that person's social role can influence a user's performance: the presence of an evaluative or non-evaluative other can influence the user's motivation and performance differently. To investigate this with robots, researchers need corresponding robot roles, which are currently missing. In the current research, we describe the design of two social roles (i.e., an evaluative role and a non-evaluative role) for a robot that can take on different appearances. For this, we used the SocibotMini, a robot with a projected face, allowing diversity and great flexibility in the presentation of human-like social cues. An empirical study in a real practice room with 20 participants confirmed that users (i.e., children) evaluated the robot roles as intended. The current research thereby provides robot roles that allow studying whether the presence of social robots in certain social roles can stimulate practicing behavior, together with suggestions for how such roles can be designed and improved. Future studies can investigate how the presence of a social robot in a certain social role can stimulate children to practice.
The uncanny valley effect denotes a dip in the otherwise positive relation between a robot's human likeness and its likeability. This paper provides first evidence that this design-guiding effect is not limited to humanoids but extends to zoomorphic robots. In a first online survey, a diverse group of 235 participants rated the animal likeness of 140 robots. Three predictors of high or low animal likeness emerged: surface properties, such as joint visibility; facial properties, such as the presence of a pupil; and animal-specific properties, such as the presence of a snout. In a second online survey, 187 participants rated the likeability of 53 robots of varying degrees of animal likeness drawn from the first study. The relation between animal likeness and likeability followed a U-shaped function and showed an uncanny valley effect: robots high and low in animal likeness were preferred over those mixing realistic and unrealistic features. Besides theoretical implications, tentative guidelines for the design of zoomorphic robots are discussed.
Since chronic loneliness is both a painful individual experience and an increasingly serious social problem, robot companions have emerged from the robotization of social work to confront this issue. We foresee that social robots will become pervasive in the near future. Thus, it is crucial to pinpoint the relationship between chronic experiences of loneliness (i.e., trait loneliness) and both anthropomorphism and acceptance of such artificial intelligent agents. Previous research demonstrated that experimentally induced state loneliness increases anthropomorphic inferences about nonhuman agents such as pets. However, in the present research we found that trait (vs. state) loneliness, a permanent personality disposition that is not easily relieved (vs. transitory experiences caused by circumstance and easily relieved), reduced participants' anthropomorphic tendencies and acceptance of a social robot (regardless of the form: a picture of the robot, an on-site robot, or direct interaction with the robot). In particular, believing that the robot lacks good "unique humanness" traits (i.e., Humble, Thorough, Organized, Broadminded, and Polite) is one reason why dispositionally lonely participants are less likely to anthropomorphize a robot, which further prompts reduced acceptance of it. This finding suggests that unique humanness, exemplified by secondary emotions, is vital not only in interpersonal contexts but also in establishing connections with social robots.
It is widely accepted that a robot's embodiment plays an important role during human-robot interaction (HRI). While many studies have explored the effect of robot appearance, relatively little is known about how the texture and stiffness of the surface material, or what may be referred to as 'robot-skin', influence how the robot is perceived. Improved understanding in this area may have direct and actionable consequences for robot design, since at present nearly all commercially available service robots have similar exterior surfaces composed of smooth, stiff materials, usually plastic. This study is framed around systematically investigating the types of textures that may be better suited to these robots. First, experiments were undertaken to classify the textural characteristics of 27 distinct materials that could potentially be used as a robot-skin. A representative subset of these materials was then selected for a second experiment that explored how the stiffness and tactile properties of the material influenced its perceived suitability for use on a service robot. The research found that people strongly preferred surface textures that were soft rather than stiff. The most suitable material stiffness was found to be context-dependent: soft options were preferred in the blind test condition, but when participants were presented with a 3D image of a service robot in an immersive virtual reality environment, medium-stiffness materials were preferred. In the final part of the study, we identified a range of textural properties that appear to correlate with high and low suitability for use on service robots. We hope these findings help inform the design of future HRI systems and motivate further investigation into the social roles of robot-skin.
Service robots often perform their main functions in public settings, interacting with more than one person at a time. How these robots should handle the affairs of individual users while also behaving appropriately when others are present is an open question. One option is to design for flexible agent embodiment: letting agents take control of different robots as people move between contexts. Through structured User Enactments, we explored how agents embodied within a single robot might interact with multiple people. Participants interacted with a robot embodied by a singular service agent, agents that re-embody in different robots and devices, and agents that co-embody within the same robot. Findings reveal key insights about the promise of re-embodiment and co-embodiment as design paradigms as well as what people value during interactions with service robots that use personalization.
How should a robot that collaborates with multiple people decide upon the distribution of resources (e.g., social attention, or parts needed for an assembly)? People are uniquely attuned to how resources are distributed. A decision to distribute more resources to one team member than another might be perceived as unfair, with potentially detrimental effects on trust. We introduce a multi-armed bandit algorithm with fairness constraints, in which a robot distributes resources to human teammates of different skill levels. In this problem, the robot does not know the skill level of each human teammate but learns it by observing their performance over time. We define fairness as a constraint on the minimum rate at which each human teammate is selected throughout the task. We provide theoretical guarantees on performance and perform a large-scale user study in which we adjust the level of fairness in our algorithm. Results show that fairness in resource distribution has a significant effect on users' trust in the system.
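As a rough illustration of the fairness constraint described above, the Python sketch below combines a standard UCB1 bandit with a minimum selection rate per teammate; the skill values and rate parameter are hypothetical, and the paper's actual algorithm and theoretical guarantees are more involved.

import math
import random

def fair_ucb(true_skill, horizon=500, min_rate=0.2):
    # Each "arm" is a human teammate; reward 1 means the teammate succeeded at the subtask.
    n_arms = len(true_skill)
    counts = [0] * n_arms    # times each teammate was selected
    means = [0.0] * n_arms   # empirical success rate per teammate

    for t in range(1, horizon + 1):
        # Fairness constraint: if a teammate's selection rate falls below min_rate, select them.
        starved = [i for i in range(n_arms) if counts[i] < min_rate * t - 1]
        if starved:
            arm = random.choice(starved)
        elif 0 in counts:
            arm = counts.index(0)  # try every teammate at least once
        else:
            # Otherwise use the standard UCB1 index (exploit skill estimates + explore).
            arm = max(range(n_arms),
                      key=lambda i: means[i] + math.sqrt(2 * math.log(t) / counts[i]))

        reward = 1.0 if random.random() < true_skill[arm] else 0.0
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
    return counts, means

counts, means = fair_ucb(true_skill=[0.9, 0.6, 0.4])
print(counts, [round(m, 2) for m in means])  # every teammate receives roughly >= 20% of selections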
Team member inclusion is vital in collaborative teams. In this work, we explore two strategies to increase the inclusion of human team members in a human-robot team: 1) giving a person in the group a specialized role (the 'robot liaison') and 2) having the robot verbally support human team members. In a human subjects experiment (N = 26 teams, 78 participants), groups of three participants completed two rounds of a collaborative task. In round one, two participants (the ingroup) completed a task with a robot in one room, and one participant (the outgroup) completed the same task with a robot in a different room. In round two, all three participants and one robot completed a second task in the same room, where one participant was designated as the robot liaison. During round two, the robot verbally supported each participant six times on average. Results show that participants in the robot liaison role reported lower perceived group inclusion than the other group members. Additionally, when outgroup members were the robot liaison, the group was less likely to incorporate their ideas into the group's final decision. In response to the robot's supportive utterances, outgroup members, but not ingroup members, showed an increase in the proportion of time they spent talking to the group. Our results suggest that specialized roles may hinder human team member inclusion, whereas supportive robot utterances show promise in encouraging contributions from individuals who feel excluded.
Communication among team members is important for efficient teamwork: it coordinates behavior and ensures that all team members have the information they need to complete the task. To enable effective communication and thus efficient teamwork, we propose a multi-agent planning approach to revealing information based on its benefit to joint team performance. By explicitly modeling the partner's knowledge and behavior, our approach allows a robot in a team to reason about when information is useful and how to communicate it effectively through efficient actions. That is, the robot provides only the information necessary for task completion, provides it at the time it is needed, and does so through the action(s) that optimize team performance. We validated this approach in a human study in which participants walked together with a robot to a destination known only to the robot. We compared our approach to a legible motion generation approach and showed that users perceived ours as more natural, socially appropriate, and fluent to team with, as well as more predictable and intent-clear. Ratings of our approach were equal to or higher than those of legible motion across all 18 survey items.
Communication is critical to collaboration; however, too much of it can degrade performance. Motivated by the need for effective use of a robot's communication modalities, in this work we present a computational framework that decides if, when, and what to communicate during human-robot collaboration. The framework, titled CommPlan, consists of a model specification process and an execution-time POMDP planner. To address the challenge of collecting interaction data, the model specification process is hybrid: part of the model is learned from data, while the remainder is manually specified. Given the model, the robot's decision-making is performed computationally during interaction and under partial observability of the human's mental states. We implement CommPlan for a shared-workspace task, in which the robot has multiple communication options and needs to reason within a short time. Through experiments with human participants, we confirm that CommPlan results in effective use of communication capabilities and improves human-robot collaboration.
In this paper, we investigate how collaborative robots, or cobots, typically composed of a robotic arm and a gripper carrying out manipulation tasks alongside human coworkers, can be enhanced with HRI capabilities by applying ideas and principles from character animation. To this end, we modified the appearance and behaviors of a cobot, with minimal impact on its functionality and performance, and studied the extent to which these modifications improved its communication with, and perception by, human collaborators. Specifically, we aimed to improve the Appeal of the robot by manipulating its physical appearance, posture, and gaze, creating an animal-like character with a head-on-neck morphology; to utilize Arcs by generating smooth trajectories for the robot arm; and to increase the lifelikeness of the robot through Secondary Action by adding breathing motions. In two user studies, we investigated the effects of these cues on collaborators' perceptions of the robot. Findings from our first study showed that breathing had a positive effect on most measures of robot perception and revealed nuanced interactions among the other factors. Data from our second study showed that, using gaze cues alone, a robot arm can improve metrics such as likeability and perceived sociability.
To investigate whether a humanoid robot's use of gestures improves children's learning of second language vocabulary, and if variation in gestures strengthens this effect, we conducted a field study where a total of 94 children (aged 4-6 years old) played a language learning game with a NAO robot. The robot either used no gestures at all, repeated the same gesture every time a target word was presented, or produced a different gesture for each occurrence of a target word. We found that, contrary to what the majority of existing research suggests, the robot's use of gestures did not result in increased learning outcomes, compared to a robot that did not use gestures. However, engagement between child and robot was higher in both the repeated and varied gesture conditions, compared to the condition without gestures. An exploratory analysis showed that age played a role: the older children in the study learned more than the younger children when the robot used gestures. It is therefore important to carefully consider the design and application of robot gestures to support the learning process. The contribution of this work is twofold: it is a conceptual reproduction of a previous study, and we have taken first steps towards exploring the role of variation in gestures. The study was preregistered, and all materials are made publicly available.
This study presents a second language word learning experiment using a social robot with motivational strategies. These strategies were implemented in a social robot tutor to stimulate preschool children's intrinsic motivation. Subsequently, we investigated their effect on children's task engagement and word learning performance. The strategies were derived from Self-Determination Theory, a well-known psychological theory which assumes that intrinsic motivation is strongly related to the fulfilment of three basic human needs: autonomy, competence, and relatedness. We found an increase in the strength and duration of task engagement when all three psychological needs were supported by the robot. However, no significant effects on learning gains were observed. Our intervention appears to be a promising method for improving child-robot interactions in educational settings, especially for sustaining engagement in long-term interactions.
Creativity is an intrinsic human ability with multiple benefits across the lifespan. Despite its importance, societies are not always well equipped to provide contexts for creativity stimulation; as a consequence, a major decline in creative abilities occurs around the age of 7 years. In this paper, we investigated the effectiveness of using a robotic system named YOLO as an intervention tool to stimulate creativity in children. During the intervention, children used YOLO as a character for their stories, and through the interaction with the robot, their creative abilities were stimulated. Our study (n = 62) included 3 experimental conditions: i) YOLO displayed behaviors based on creativity techniques; ii) YOLO displayed behaviors based on creativity techniques plus social behaviors; iii) YOLO was turned off, not displaying any behaviors. We measured children's creative abilities at pre- and post-testing and their creative process through behavior analysis. Results showed that interaction with YOLO contributed to higher creativity levels in children, specifically to the generation of more original ideas during story creation. This study shows the potential of using social robots as tools to empower intrinsic human abilities, such as the ability to be creative.
Prior work in affect-aware educational robots has often relied on a common belief that the relationship between student affect and learning is independent of agent behaviors (child's/robot's) or unidirectional (positive/negative but not both) throughout the entire student-robot interaction. We argue that the student affect-learning relationship should be interpreted in two contexts: (1) social learning paradigm and (2) sub-events within child-robot interaction. In our paper, we examine two different social learning paradigms where children interact with a robot that acts either as a tutor or a tutee. Sub-events within child-robot interaction are defined as task-related events occurring in specific phases of an interaction (e.g., when the child/robot gets a wrong answer). We examine sub-events at a macro level (entire interaction) and a micro level (within specific sub-events). In this paper, we provide an in-depth correlation analysis of children's facial affect and vocabulary learning. We found that children's affective displays became more predictive of their vocabulary learning when children interacted with a tutee robot who did not scaffold their learning. Additionally, children's affect displayed during micro-level events was more predictive of their learning than during macro-level events. Last, we found that the affect-learning relationship is not unidirectional, but rather is modulated by context, i.e., several affective states facilitated student learning when displayed in some sub-events but inhibited learning when displayed in others. These findings indicate that both social learning paradigm and sub-events within interaction modulate student affect-learning relationship.
While social robots for education are slowly being integrated into many scenarios, ranging from higher education through elementary school and kindergarten, the use case of robots for toddlers in their homes has not gained much attention. In this contribution, we introduce Patricc, a robotic platform specifically designed for toddler-parent-robot triadic interaction. It addresses the unique challenges of this age group, namely the desire for continuous physical interaction and novelty. Patricc's unique design enables changing characters by using dressable puppets over a 3D-printed skeleton and the use of physical props. A novel authoring tool enables robot behavior and content creation by non-programmers. We conducted an evaluation study with 18 parent-toddler pairs and compared Patricc to similar tablet-based interactions. Our quantitative and qualitative analyses show that Patricc promotes significantly more triadic interaction, measured by video-coded gaze, than the tablet, and that parents indeed perceive the interaction as triadic. Furthermore, there was no significant novelty-induced change in task-oriented behaviors when toddlers interacted with two different characters consecutively. Finally, parents pointed out the benefits of changeable puppet-like characters over tablets and the appropriateness of the platform for the target age group. These results suggest that Patricc can serve as toddlers' first gateway to the emerging world of social robots.
In this paper, we specify and validate three interaction design patterns for an interactive storytelling experience with an autonomous social robot. The patterns enable the child to make decisions about the story by talking with the robot, to reenact parts of the story together with the robot, and to record self-made sound effects. The design patterns successfully support children's engagement and agency. A user study (N = 27, 8-10 y.o.) showed that children paid more attention to the robot, enjoyed the storytelling experience more, and could recall more about the story when the design patterns were employed by the robot during storytelling. All three aspects are important features of engagement. Children felt more autonomous during storytelling with the design patterns and highly appreciated that the patterns allowed them to express themselves more freely. Both aspects are important features of children's agency. Important lessons we have learned are that reducing points of confusion and giving children more time to make themselves heard by the robot will improve the patterns' ability to support engagement and agency. Allowing children to pick and choose from a diverse set of stories and interaction settings would make the storytelling experience more inclusive for a broader range of children.
We envision a future where service robots autonomously learn how to interact with humans directly from human-human interaction data, without any manual intervention. In this paper, we present a data-driven pipeline that: (1) takes in low-level data of a human shopkeeper interacting with multiple customers (28 hours of collected data); (2) autonomously extracts high-level actions from that data; and (3) learns, without manual intervention, how a robotic shopkeeper should respond to customers' actions online. Our proposed system for learning the interaction logic uses neural networks to first learn which customer actions are important to respond to and then learn how the shopkeeper should respond to those important customer actions. We present a novel technique for learning which customer actions are important by first learning the hidden causal relationship between customer and shopkeeper actions. In an offline evaluation, we show that our proposed technique significantly outperforms state-of-the-art baselines, both in identifying which customer actions are important and in deciding how to respond to them.
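A heavily simplified sketch of such a two-stage interaction-logic learner is shown below, assuming generic feature vectors and small scikit-learn neural networks purely for illustration; the feature representations, architectures, and the causal-relationship technique used in the paper are not reproduced here.

import numpy as np
from sklearn.neural_network import MLPClassifier

# Stage 1 decides WHETHER a customer action is important enough to respond to;
# stage 2 decides HOW the shopkeeper should respond. Data here is synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32))             # vectorized customer actions (e.g., speech + motion features)
important = rng.integers(0, 2, size=1000)   # stage-1 labels: respond or not
response = rng.integers(0, 10, size=1000)   # stage-2 labels: index of a typical shopkeeper action

importance_net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=200).fit(X, important)
response_net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=200).fit(
    X[important == 1], response[important == 1])

def shopkeeper_policy(customer_action_features):
    # Respond only to important customer actions; otherwise do nothing.
    x = customer_action_features.reshape(1, -1)
    if importance_net.predict(x)[0] == 1:
        return int(response_net.predict(x)[0])  # index into a library of robot responses
    return None

print(shopkeeper_policy(rng.normal(size=32)))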
Robots need models of human behavior for both inferring human goals and preferences, and predicting what people will do. A common model is the Boltzmann noisily-rational decision model, which assumes people approximately optimize a reward function and choose trajectories in proportion to their exponentiated reward. While this model has been successful in a variety of robotics domains, its roots lie in econometrics, and in modeling decisions among different discrete options, each with its own utility or reward. In contrast, human trajectories lie in a continuous space, with continuous-valued features that influence the reward function. We propose that it is time to rethink the Boltzmann model, and design it from the ground up to operate over such trajectory spaces. We introduce a model that explicitly accounts for distances between trajectories, rather than only their rewards. Rather than each trajectory affecting the decision independently, similar trajectories now affect the decision together. We start by showing that our model better explains human behavior in a user study. We then analyze the implications this has for robot inference, first in toy environments where we have ground truth and find more accurate inference, and finally for a 7DOF robot arm learning from user demonstrations.
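For reference, the standard Boltzmann noisily-rational model discussed above assigns probability to a trajectory in proportion to its exponentiated reward; in the usual notation, with rationality coefficient \(\beta\) and reward parameters \(\theta\),

\[
P(\xi \mid \theta) \;=\; \frac{\exp\big(\beta\, R_\theta(\xi)\big)}{\int_{\Xi} \exp\big(\beta\, R_\theta(\bar{\xi})\big)\, d\bar{\xi}}.
\]

The model proposed here reweights this choice rule so that similar trajectories share probability mass rather than competing as independent options; the exact distance-aware formulation is given in the paper and is not reproduced in this sketch.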
As the capacity for machines to extend human capabilities continues to grow, the communication channels used must also expand. Allowing machines to interpret nonverbal commands such as gestures can help make interactions more similar to interactions with another person. Yet to be pervasive and effective in realistic scenarios, such interfaces should not require significant sensing infrastructure or per-user setup time. The presented work takes a step towards these goals by using wearable muscle and motion sensors to detect gestures without dedicated calibration or training procedures. An algorithm is presented for clustering unlabeled streaming data in real time, and it is applied to adaptively thresholding muscle and motion signals acquired via electromyography (EMG) and an inertial measurement unit (IMU). This enables plug-and-play online detection of arm stiffening, fist clenching, rotation gestures, and forearm activation. It also augments a neural network pipeline, trained only on strategically chosen training data from previous users, to detect left, right, up, and down gestures. Together, these pipelines offer a plug-and-play gesture vocabulary suitable for remotely controlling a robot. Experiments with 6 subjects evaluate classifier performance and interface efficacy. Classifiers correctly identified 97.6% of 1,200 cued gestures, and a drone correctly responded to 81.6% of 1,535 unstructured gestures as subjects remotely controlled it through target hoops during 119 minutes of total flight time.
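Below is a minimal Python sketch of calibration-free, online adaptive thresholding for a single streaming EMG envelope, illustrating the clustering idea described above; the real system clusters muscle and motion signals jointly and augments a trained neural network, and all parameter values here are hypothetical.

class AdaptiveThreshold:
    # Tracks a "relaxed" and an "activated" cluster center on an unlabeled stream
    # and thresholds new samples at their midpoint, so no per-user calibration is needed.
    def __init__(self, alpha=0.05):
        self.lo = None      # running estimate of the relaxed-muscle level
        self.hi = None      # running estimate of the activated-muscle level
        self.alpha = alpha  # smoothing factor for the running means

    def update(self, sample):
        if self.lo is None:            # initialize both clusters at the first sample
            self.lo = self.hi = sample
            return False
        threshold = 0.5 * (self.lo + self.hi)
        active = sample > threshold
        if active:                     # pull the nearer cluster center toward the new sample
            self.hi += self.alpha * (sample - self.hi)
        else:
            self.lo += self.alpha * (sample - self.lo)
        return active                  # True while the muscle appears stiffened or clenched

# Usage: feed rectified, low-pass-filtered EMG samples as they arrive.
detector = AdaptiveThreshold()
for emg_envelope_sample in [0.10, 0.12, 0.11, 0.90, 0.95, 0.15]:
    print(detector.update(emg_envelope_sample))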
Theories on social learning indicate that imitative choices are usually performed whenever copying others' behaviour has no additional cost. Here, we extended such investigations of social learning to Human-Robot Interaction (HRI). Participants played the Economic Investment Game with a robot banker while observing another robot player also investing in the robot banker. By manipulating the robot banker's payoff, three conditions of unfairness were created: (1) unfair payoff for the participants, (2) unfair payoff for the robot player and (3) unfair payoff for both. Results showed that when the payoff was low for the participants and high for the robot player, participants invested more money in the robot banker than when both parties received a low return. Also, for this specific condition, participants' investments increased further with a more interactive robot player (defined as demonstrating increased attention, congruent movements and speech). This suggests that social and cognitive human competencies can be used and transposed to non-human agents. Further, imitation can potentially be extended to HRI, with interactivity likely having a key role in increasing this effect.
The potential that robots offer to support humans in multiple aspects of our daily lives is increasingly acknowledged. Despite the clear progress in social robotics and human-robot interaction, the actual realization of this potential still faces numerous scientific and technical challenges, many of them linked to difficulties in dealing with the complexity of the real world. Achieving real-world human-robot interaction requires, on the one hand, taking into account and addressing real-world (e.g., stakeholders') needs and application areas and, on the other hand, making our robots operational in the real world. In this talk, I will address some of the contributions that Embodied Artificial Intelligence can make towards this goal, illustrating my arguments with examples of my and my group's research on HRI using embodied autonomous affective robots in areas such as developmental robotics, healthcare, and computational psychiatry. Embodied AI, which started as an alternative to "symbolic AI" (a "paradigm change") in how the notion of "intelligence" and the interactions of embodied agents with the real world are conceived and modelled, has so far been little explored in HRI. It is nonetheless highly relevant to achieving "real-world HRI", with its emphasis on notions such as autonomy, adaptation, interaction with dynamic environments, sensorimotor loops and coordination, learning from interactions and, more generally, as Rodney Brooks put it, using and exploiting the real world as "its own best model".
This paper explores how humans interpret displays of emotion produced by a social robot in real world situated interaction. Taking a multimodal conversation analytic approach, we analyze video data of families interacting with a Cozmo robot in their homes. Focusing on one happy and one sad robot animation, we study, on a turn-by-turn basis, how participants respond to audible and visible robot behavior designed to display emotion. We show how emotion animations are consequential for interactional progressivity: While displays of happiness typically move the interaction forward, displays of sadness regularly lead to a reconsideration of previous actions by humans. Furthermore, in making sense of the robot animations people may move beyond the designer's reported intentions, actually broadening the opportunities for their subsequent engagement. We discuss how sadness functions as an interactional "rewind button" and how the inherent vagueness of emotion displays can be deployed in design.
Following the affective turn in cognitive science, recent decades have witnessed an increasing interest in the role of emotions in education. Ample evidence suggests that learners and teachers experience a variety of emotions, ranging from joy and pride to anger and frustration. However, when it comes to the design of affective behavior in robotic systems for educational purposes, the emphasis has been predominantly on the communication of positive emotions. While we recognize that positive emotions are fundamental to successful learning, in this paper we wish to make the case for the consideration of ambivalent emotions in the design of social robots for tutoring. To ground this proposal, we focus on the emotion of teachers' disappointment. First, we discuss under which conditions communicated teachers' disappointment, while it may be experienced as emotionally ambivalent by teachers and students, functions as an affiliating pedagogical strategy. We proceed to sketch out the methodological suggestions we consider relevant for future studies of communicated disappointment in human-robot interactions within learning contexts. We conclude with critical reflections about the ethics of responsible designs of such studies.
We propose a method for modifying affective robot movements using neural networks. Social robots use gestures and other movements to express their internal states. However, a robot's interactive capabilities are hindered by the predominant use of a limited set of preprogrammed or hand-animated behaviors, which can be repetitive and predictable, making sustained human-robot interactions difficult to maintain. To address this, we developed a method for modifying existing emotive robot movements by using neural networks. We use hand-crafted movement samples and a classifying variational autoencoder trained on these samples. Our method then allows for adjustment of affective movement features by using simple arithmetic in the network's latent embedding space. We present the implementation and evaluation of this approach and show that editing in the latent space can modify the emotive quality of the movements while preserving recognizability and legibility in many cases. This supports neural networks as viable tools for creating and modifying expressive robot behaviors.
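A minimal sketch of the latent-space editing step, under the assumption of an already-trained encoder and decoder (stubbed here with random linear maps so the snippet runs): an affect direction is estimated as the difference of mean latent codes between two labelled movement sets, and movements are edited by simple vector arithmetic before decoding.

    # Illustrative latent-space arithmetic; the encoder/decoder are stand-ins,
    # not the trained classifying VAE described in the paper.
    import numpy as np
    rng = np.random.default_rng(0)
    W = rng.normal(size=(60, 8))          # stand-in linear "codec" weights

    def encode(m):   # project a 60-D pose trajectory to an 8-D latent code
        return m @ W

    def decode(z):   # least-squares back-projection to trajectory space
        return z @ np.linalg.pinv(W)

    def edit_movement(movement, affect_dir, strength=1.0):
        """Shift the latent code along an affect direction and decode."""
        z = encode(movement)
        return decode(z + strength * affect_dir)

    # Illustrative direction: difference of mean latent codes of two labelled sets
    happy_dir = (encode(rng.normal(size=(10, 60))).mean(0)
                 - encode(rng.normal(size=(10, 60))).mean(0))
    edited = edit_movement(rng.normal(size=60), happy_dir, strength=1.5)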
The ability to clearly communicate a wide range of emotional states is considered a desirable trait for social robots. This research proposes that the Geneva Emotion Wheel (GEW), a self-report instrument for measuring emotional reactions, has strong potential for use as a tool for evaluating the expression of affective content by robots. Factors that make the GEW advantageous over existing evaluation methods include: ease of administration, reduction in the importance of word labels, and coverage of "no emotion" states. Statistical analyses of the GEW are proposed, isolating quantitative metrics of emotion distinctness. An experiment requiring participants to rate the perceived emotion of a social robot was conducted, employing the proposed methods. Analysis using the GEW revealed significant differences in the reliability of different expressions to clearly convey emotional states. The GEW provided a repeatable, systematic framework for estimating perceived affect of robot expression. Thus, the results suggest the GEW offers a powerful tool for design purposes as well as analysis. To support future research using the GEW, the software used for the analysis has been packaged and made available as an open-source resource to the community.
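The paper's exact distinctness metrics are not reproduced here, but because GEW responses lie on a wheel, one natural family of statistics is circular: the sketch below computes the mean resultant length of response angles, where values near 1 indicate that raters converge on the same emotion family and values near 0 indicate diffuse, indistinct ratings.

    # Hedged sketch of a circular-statistics distinctness measure for GEW data.
    import numpy as np

    def mean_resultant_length(angles_rad, intensities=None):
        """Circular concentration of GEW responses; higher = more distinct."""
        w = np.ones_like(angles_rad) if intensities is None else intensities
        C = np.sum(w * np.cos(angles_rad)) / np.sum(w)
        S = np.sum(w * np.sin(angles_rad)) / np.sum(w)
        return np.hypot(C, S)

    # Example: wheel positions of six raters' responses, one outlier rating
    responses = np.deg2rad([10, 12, 15, 8, 200, 11])
    print(mean_resultant_length(responses))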
Telepresence robots act as the remote embodiments of human operators, enabling people to stay connected to friends, family, and coworkers over lengthy physical separations. However, the factors affecting how humans can best make use of such systems are not yet well understood. This paper explores the effects of personalization and relationship closeness on telepresence via two studies. Study 1 was a between-participants experiment that investigated telepresence robot personalization. 32 pairs of friends (N = 64) participated in the study's team-building-style activities and answered questions about robot operator presence. The results unexpectedly indicated that relationship closeness influenced the interaction experience more than any other considered predictor variable. To study closeness more rigorously as the central manipulation, we conducted Study 2, a between-participants experiment with 24 pairs (N = 48) and a similar procedure. Robot operators who reported a closer relationship with their teammate felt more present in this investigation. These findings can inform the design and application of telepresence robot systems to increase a remote operator's feelings of presence via robot.
In this paper, we design and evaluate a novel form of visually-simulated haptic feedback cue for communicating weight in robot teleoperation. We propose that a visuo-proprioceptive cue results from inconsistencies created between the user's visual and proprioceptive senses when the robot's movement differs from the movement of the user's input. In a user study where participants teleoperate a six-DoF robot arm, we demonstrate the feasibility of using such a cue for communicating weight in four telemanipulation tasks to enhance user experience and task performance.
In this paper, we study the effects of delays in a mimicry-control robot teleoperation interface which involves a user moving their arms to directly show the robot how to move and the robot follows in real time. Unlike prior work considering delays in other teleoperation systems, we consider delays due to robot slowness in addition to latency in the onset of movement commands. We present a human-subjects study that shows how different amounts and types of delays have different effects on task performance. We compare the movements under different delays to reveal the strategies that operators use to adapt to delay conditions and to explain performance differences. Our results show that users can quickly develop strategies to adapt to slowness delays but not onset latency delays. We discuss the implications of our results for the future development of methods designed to mitigate the effects of delays.
Researchers have proposed models of curiosity as a means to drive robots to learn and adapt to their environments. While these models balance goal- and exploration-oriented actions in a mathematically principled manner, it is not understood how users perceive a robot that pursues off-task actions. Motivated by a model of curiosity based on intrinsic rewards, we conducted three online video-surveys with a total of 264 participants, evaluating a variety of curious behaviors. Our results indicate that a robot's off-task actions are perceived as expressions of curiosity, but that these actions lead to a negative impact on perceptions of the robot's competence. When the robot explains or acknowledges its deviation from the primary task, this can partially mitigate the negative effects of off-task actions.
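As a rough illustration of how intrinsic rewards produce off-task actions (the bonus form and numbers are ours, not the surveyed model's), the sketch below scores candidate actions by task reward plus a count-based novelty bonus; once the on-task action has been repeated, its bonus decays and an off-task action can briefly win, which observers may read as curiosity.

    # Illustrative intrinsic-reward action selection (count-based novelty bonus).
    from collections import defaultdict

    visit_counts = defaultdict(int)

    def choose_action(candidates, task_reward, beta=0.5):
        """candidates: action ids; task_reward: dict action -> extrinsic reward."""
        def score(a):
            novelty = 1.0 / (1 + visit_counts[a])   # decays as the action repeats
            return task_reward.get(a, 0.0) + beta * novelty
        best = max(candidates, key=score)
        visit_counts[best] += 1
        return best

    # After the on-task action is chosen once, its novelty bonus shrinks and the
    # off-task "inspect_object" action wins the second round.
    for _ in range(3):
        print(choose_action(["deliver_item", "inspect_object"],
                            {"deliver_item": 0.1, "inspect_object": 0.0}))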
Interacting physically with robots and sharing environments with them leads to situations where humans and robots have to cross each other in narrow corridors. In these cases, the robot has to make space for the human to pass. From observations of human-human crossing behaviours, we isolated two main factors in this avoidance behaviour: body rotation and sliding motion. We implemented a robot controller able to vary these factors and explored how this variation impacted people's perception. Results from a within-participants study involving 23 participants show that people prefer a robot that rotates its body when crossing them. Additionally, a sliding motion is rated as warmer. These results show the importance of social avoidance behaviour when robots interact with humans.
Human partners are very effective at coordinating in space and time. This ability is particularly remarkable considering that visual perception of space is a complex inferential process, which is affected by individual prior experience (e.g., the history of previous stimuli). As a result, two partners might perceive the same stimulus differently. Yet, they find a way to align their perception, as demonstrated by the high degree of coordination observed in sports or even in everyday gestures such as shaking hands. Robots would need a similar ability to align with their partner's perception. However, to date there is no knowledge of how the inferential mechanism supporting visual perception operates during social interaction. In the current work, we use a humanoid robot to address this question. We replicate a standard protocol for the quantification of perceptual inference in an HRI setting. Participants estimated the length of a set of segments presented by the humanoid robot iCub. The robot behaved in one condition as a mechanical arm driven by a computer and in another condition as an interactive, social partner. Even though the stimuli presented were the same in the two conditions, length perception differed when the robot was judged as an interactive agent rather than a mechanical tool. When playing with the social robot, participants relied significantly less on stimulus history. This result suggests that the brain changes optimization strategies during interaction and lays the foundations for designing human-aware robot visual perception.
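The standard account behind such protocols, a central-tendency model in which each length estimate mixes the current stimulus with a prior built from stimulus history, can be sketched as follows; the weights are illustrative rather than fitted, with the social condition corresponding to a smaller history weight.

    # Illustrative central-tendency model of length estimation.
    import numpy as np

    def estimates(stimuli, w_history):
        """Perceived lengths as a (1 - w)*stimulus + w*history-prior mix."""
        out, history_mean = [], stimuli[0]
        for s in stimuli:
            out.append((1 - w_history) * s + w_history * history_mean)
            history_mean += 0.2 * (s - history_mean)   # slowly updated prior
        return np.array(out)

    lengths = np.array([6.0, 9.0, 5.0, 10.0, 7.0])     # presented segment lengths (cm)
    print(estimates(lengths, w_history=0.4))           # "mechanical arm" condition
    print(estimates(lengths, w_history=0.1))           # "social partner" condition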
A key capability of morally competent robots is to reject or question potentially immoral human commands. However, robot rejections of inappropriate commands must be phrased with great care and tact. Previous research has shown that failure to calibrate the "face threat" in a robot's command rejection to the severity of the norm violation in the command can lead humans to perceive the robot as inappropriately harsh and can needlessly decrease robot likeability. However, it is well-established that gender plays a significant role in determining linguistic politeness norms and that people have a powerful natural tendency to gender robots. Yet, the effect of robotic gender presentation on these noncompliance interactions is not well understood. We present an experiment that explores the effects of robot and human gender on perceptions of robots in noncompliance interactions, and find evidence of a complicated interplay between these gendered factors. Our results suggest that (1) it may be more favorable for a male robot to reject commands than for a female robot to do so, (2) it may be more favorable to reject commands given by a male human than by a female human, and (3) that robots may be perceived more favorably when their gender matches that of human interactants and observers.
Human-robot interaction (HRI) research aims to design natural interactions between humans and robots. Intonation, a social signaling function in human speech investigated thoroughly in linguistics, has not yet been studied in HRI. This study investigates the effect of robot speech intonation in four conditions (no intonation, focus intonation, end-of-utterance intonation, or combined intonation) on conversational naturalness, social engagement, and people's humanlike perception of the robot, collecting objective and subjective data from participant conversations (n = 120). Our results showed that humanlike intonation partially improved subjective naturalness but not observed fluency, and that intonation partially improved social engagement but did not affect humanlike perceptions of the robot. Given that our results mainly differed from our hypotheses based on human speech intonation, we discuss the implications and provide suggestions for future research to further investigate conversational naturalness in robot speech intonation.
The service industry is facing an increase in the number of malicious customers (customers with unreasonable complaints). Employees have reported that handling unreasonable complaints is particularly stressful. Considering the recent push for workplace automation, robots should handle this task in place of humans. We propose a robot behavioral model designed for handling unreasonable complaints. The robot with this model has to "please the customer" without proposing a settlement. From a large survey of Japanese workers conducted by labor unions, together with an interview survey of experienced workers that we conducted, we identified the conventional complaint handling flow as 1) listen to the complaint, 2) confirm the content of the complaint, 3) apologize, 4) give an explanation, and 5) conclude. The proposed behavioral model is a variation of this flow that takes into account the "state of mind" of the customer. In particular, the robot with this model does not leave the first step and keeps asking questions until the customer is "ready to listen". We conducted a user study, using a Wizard-of-Oz approach, to compare the proposed behavioral model to a baseline implementing the conventional flow. We replicated in our laboratory the situation of a customer in a mobile phone shop. The proposed behavioral model was significantly better at making customers believe that the robot listened to them and tried to help.
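The proposed flow can be sketched as a small state machine (illustrative only; ready_to_listen stands in for whatever estimate of the customer's state of mind is used): unlike the conventional flow, the robot does not leave the listening state until the customer is judged ready.

    # Minimal state-machine sketch of the complaint-handling flow.
    STATES = ["LISTEN", "CONFIRM", "APOLOGIZE", "EXPLAIN", "CONCLUDE"]

    def next_state(state, ready_to_listen):
        if state == "LISTEN" and not ready_to_listen:
            return "LISTEN"                 # stay and keep asking questions
        i = STATES.index(state)
        return STATES[min(i + 1, len(STATES) - 1)]

    state = "LISTEN"
    for ready in [False, False, True, True, True, True]:   # simulated customer signals
        print(state)
        state = next_state(state, ready)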
People take to social media to share their thoughts, joys, and sorrows. A recent popular trend has been to support and mourn people and pets that have died, as well as other objects that have suffered catastrophic damage. As several popular robots have been discontinued, including the Opportunity Rover, Jibo, and Kuri, we are interested in how the language used to mourn these robots compares to that used to mourn people, animals, and other objects. We performed a study in which we asked participants to categorize deidentified Twitter reactions as referencing the death of a person, an animal, a robot, or another object. Most reactions were labeled as being about humans, which suggests that people use similar language to describe feelings for animate and inanimate entities. We used a natural language toolkit to analyze language from a larger set of tweets. A majority of tweets about Opportunity included second-person ("you") and gendered third-person pronouns (she/he versus it), but terms like "R.I.P." were reserved almost exclusively for humans and animals. Our findings suggest that people verbally mourn robots similarly to living things, but reserve some language for people.
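The kind of counts discussed can be illustrated with plain pattern matching (the tweets below are invented placeholders, and this is not the toolkit or pipeline used in the study): second-person and gendered third-person pronouns versus "R.I.P."-style phrases.

    # Illustrative tallying of pronoun and phrase classes in a set of tweets.
    import re
    from collections import Counter

    PATTERNS = {
        "second_person": r"\byou\b|\byour\b",
        "gendered_third": r"\bshe\b|\bher\b|\bhe\b|\bhis\b|\bhim\b",
        "neuter_third": r"\bit\b|\bits\b",
        "rip": r"\br\.?i\.?p\.?\b",
    }

    def count_classes(tweets):
        counts = Counter()
        for t in tweets:
            for label, pat in PATTERNS.items():
                if re.search(pat, t.lower()):
                    counts[label] += 1
        return counts

    print(count_classes(["Goodnight Oppy, you did great",
                         "She gave us 15 years of science",
                         "R.I.P. little rover"]))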
A study was conducted to investigate the effects of category labels of domestic robots on their consumer acceptance. The authors posited that, compared to the label "robots", a pre-existent category label such as "home appliances" would increase consumers' evaluation of and purchase intention towards the products. It is suggested that the pre-existent category label helps consumers to perceive the functional values they stand to gain by consuming the product more than the label "robots", which is often associated with concepts generated around cultural artifacts. The results of the study confirmed the hypotheses, and further discussion is provided in this paper.
Mind perception in robots has been an understudied construct in human-robot interaction (HRI) compared to similar concepts such as anthropomorphism and the intentional stance. In a series of three experiments, we identify two factors that could potentially influence mind perception and moral concern in robots: how the robot is introduced (framing), and how the robot acts (social behaviour). In the first two online experiments, we show that both framing and behaviour independently influence participants' mind perception. However, when we combined both variables in the following real-world experiment, these effects failed to replicate. We hence identify a third factor post-hoc: the online versus real-world nature of the interactions. After analysing potential confounds, we tentatively suggest that mind perception is harder to influence in real-world experiments, as manipulations are harder to isolate compared to virtual experiments, which only provide a slice of the interaction.
Social robots interacting with users in real-life environments will often show surprising or even undesirable behavior. In this paper we investigate whether a robot's ability to self-explain its behavior affects the users' perception and assessment of this behavior. We propose an explanation model based on humans' folk-psychological concepts and test different explanation strategies in specifically designed HRI scenarios with robot behaviors perceived as intentional, but differently surprising or desirable. All types of explanation strategies increased the understandability and desirability of the behaviors. While merely stating an action had effects similar to giving a reason for it (an intention or need), combining both in a causal explanation helped the robot to better justify its behavior and to increase its understandability and desirability to a larger extent.
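The three strategy types can be illustrated with a trivial template sketch (the behaviour and wording are invented, not the study's materials): stating the action, giving the reason, or combining both into a causal explanation.

    # Illustrative composition of the three explanation strategies.
    def explain(action, reason, strategy="causal"):
        if strategy == "action":
            return f"I am {action}."
        if strategy == "reason":
            return f"I want {reason}."
        # causal: the action justified by the reason
        return f"I am {action} because I want {reason}."

    for s in ("action", "reason", "causal"):
        print(explain("driving to the charging station",
                      "to keep my battery from running out", strategy=s))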
Enabling diverse users to program robots for different applications is critical for robots to be widely adopted. Most of the new collaborative robot manipulators come with intuitive programming interfaces that allow novice users to compose robot programs and tune their parameters. However, parameters like motion speeds or exerted forces cannot be easily demonstrated and often require manual tuning, resulting in a tedious trial-and-error process. To address this problem, we formulate tuning of one-dimensional parameters as an Active Learning problem where the learner iteratively refines its estimate of the feasible range of parameter values, by selecting informative queries. By executing the parametrized actions, the learner gathers the user's feedback, in the form of directional answers ("higher,'' "lower,'' or "fine''), and integrates it in the estimate. We propose an Active Learning approach based on Expected Divergence Maximization for this setting and compare it against two baselines with synthetic data. We further compare the approaches on a real-robot dataset obtained from programs written with a simple Domain-Specific Language for a robot arm and manually tuned by expert users (N=8) to perform four manipulation tasks. We evaluate the effectiveness and usability of our interactive tuning approach against manual tuning with a user study where novice users (N=8) tuned parameters of a human-robot hand-over program.
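The interaction pattern can be sketched with a simplified expected-information-gain variant of the query selection (not the authors' Expected Divergence Maximization implementation): a belief over candidate feasible ranges is updated from directional answers, and the next query is the parameter value whose expected answer changes that belief the most.

    # Simplified information-gain query selection for directional feedback.
    import numpy as np
    from itertools import combinations

    grid = np.linspace(0.0, 1.0, 21)
    hyps = [(a, b) for a, b in combinations(grid, 2)]     # candidate feasible ranges
    belief = np.full(len(hyps), 1.0 / len(hyps))

    def likelihood(answer, q, h, eps=0.05):
        a, b = h
        true = "higher" if q < a else "lower" if q > b else "fine"
        return 1 - eps if answer == true else eps / 2

    def posterior(answer, q, belief):
        post = belief * np.array([likelihood(answer, q, h) for h in hyps])
        return post / post.sum()

    def expected_information_gain(q, belief):
        gain = 0.0
        for answer in ("higher", "lower", "fine"):
            like = np.array([likelihood(answer, q, h) for h in hyps])
            p_answer = float(np.dot(belief, like))
            post = belief * like / p_answer
            gain += p_answer * np.sum(post * np.log((post + 1e-12) / (belief + 1e-12)))
        return gain

    def select_query(belief):
        return max(grid, key=lambda q: expected_information_gain(q, belief))

    # One interaction step with a simulated user whose true range is [0.3, 0.5]
    def simulated_user(q, a=0.3, b=0.5):
        return "higher" if q < a else "lower" if q > b else "fine"

    q = select_query(belief)
    belief = posterior(simulated_user(q), q, belief)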
We explore first-person demonstration as an intuitive way of producing task demonstrations to facilitate user-centric robotic assistance. First-person demonstration directly captures the human experience of task performance via head-mounted cameras and naturally includes productive viewpoints for task actions. We implemented a perception system that parses natural first-person demonstrations into task models consisting of sequential task procedures, spatial configurations, and unique task viewpoints. We also developed a robotic system capable of interacting autonomously with users as it follows previously acquired task demonstrations. To evaluate the effectiveness of our robotic assistance, we conducted a user study contextualized in an assembly scenario; we sought to determine how assistance based on a first-person demonstration (user-centric assistance) versus that informed only by the cover image of the official assembly instruction (standard assistance) may shape users' behaviors and overall experience when working alongside a collaborative robot. Our results show that participants felt that their robot partner was more collaborative and considerate when it provided user-centric assistance than when it offered only standard assistance. Additionally, participants were more likely to exhibit unproductive behaviors, such as using their non-dominant hand, when performing the assembly task without user-centric assistance.
This paper addresses the problem of training a robot to carry out temporal tasks of arbitrary complexity via evaluative human feedback that can be inaccurate. A key idea explored in our work is a kind of curriculum learning---training the robot to master simple tasks and then building up to more complex tasks. We show how a training procedure, using knowledge of the formal task representation, can decompose and train any task efficiently in the size of its representation. We further provide a set of experiments that support the claim that non-expert human trainers can decompose tasks in a way that is consistent with our theoretical results, with more than half of participants successfully training all of our experimental missions. We compared our algorithm with existing approaches and our experimental results suggest that our method outperforms alternatives, especially when feedback contains mistakes.
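A minimal sketch of the curriculum idea, assuming for illustration that the temporal task can be read off its formal representation as an ordered list of subgoals: the robot trains on increasingly long prefixes before attempting the full mission, with the trainer's evaluative feedback stubbed out.

    # Illustrative curriculum over sub-tasks of a temporal mission.
    def curriculum(subgoals):
        """Yield sub-tasks of increasing length: [g1], [g1, g2], ..."""
        for i in range(1, len(subgoals) + 1):
            yield subgoals[:i]

    def train_until_mastered(subtask, evaluate, max_rounds=100):
        for _ in range(max_rounds):
            if evaluate(subtask):       # evaluative feedback may be noisy in practice
                return True
        return False

    mission = ["reach_door", "open_door", "deliver_package"]
    for subtask in curriculum(mission):
        mastered = train_until_mastered(subtask, evaluate=lambda t: True)
        print("mastered:", subtask, mastered)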
Reinforcement learning (RL) has achieved tremendous success as a general framework for learning how to make decisions. However, this success relies on the interactive hand-tuning of a reward function by RL experts. On the other hand, inverse reinforcement learning (IRL) seeks to learn a reward function from readily obtained human demonstrations. Yet, IRL suffers from two major limitations: 1) reward ambiguity - there are an infinite number of possible reward functions that could explain an expert's demonstration, and 2) heterogeneity - human experts adopt varying strategies and preferences, which makes learning from multiple demonstrators difficult due to the common assumption that demonstrators seek to maximize the same reward. In this work, we propose a method to jointly infer a task goal and humans' strategic preferences via network distillation. This approach enables us to distill a robust task reward (addressing reward ambiguity) and to model each strategy's objective (handling heterogeneity). We demonstrate that our algorithm can better recover task and strategy rewards and imitate the strategies in two simulated tasks and a real-world table tennis task.
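A hedged sketch of the joint-reward structure (placeholder losses and features, not the paper's training setup): a shared network models the common task reward, a small head per demonstrator models that person's strategic preference, and the reward attributed to demonstrator i is their sum.

    # Illustrative shared task reward plus per-demonstrator strategy heads.
    import torch
    import torch.nn as nn

    n_demonstrators, feat_dim = 3, 16

    task_reward = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 1))
    strategy_rewards = nn.ModuleList(
        [nn.Linear(feat_dim, 1) for _ in range(n_demonstrators)]
    )

    def reward(features, demonstrator_id):
        """Total reward = shared task reward + demonstrator-specific strategy reward."""
        return task_reward(features) + strategy_rewards[demonstrator_id](features)

    # Placeholder objective: prefer each demonstrator's own states over random ones.
    opt = torch.optim.Adam(
        list(task_reward.parameters()) + list(strategy_rewards.parameters()), lr=1e-3
    )
    for step in range(100):
        i = step % n_demonstrators
        demo = torch.randn(8, feat_dim)      # states visited by demonstrator i (fake)
        rand = torch.randn(8, feat_dim)      # off-distribution states (fake)
        loss = -(reward(demo, i).mean() - reward(rand, i).mean())
        opt.zero_grad()
        loss.backward()
        opt.step()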