We describe the implementation and evaluation of a public interactive robotic art installation in a rehabilitation hospital. The project had two goals: to provide an enjoyable and novel artistic experience for the hospital community, and to better understand how human-centred robotics, particularly a receptive-focused intervention, might promote wellbeing and quality of life for members of hospital communities. We assessed the value of the installation by evaluating the experiences of participants and stakeholders. This work contributes relevant insight towards the development of future art installations within the health sector and more broadly. The data also informs the ongoing discussion concerning the potential role of social and therapeutic robots in health care settings.
Can or should a robot ever engage in a financial transaction with a human? If so, how? What about an enforceable agreement? Blockchain technology has enabled the development of cryptocurrencies and smart contracts, and unlocked a plethora of other disruptive technologies. But beyond its use in cryptocurrencies and network coordination, blockchain technology may have serious sociotechnical implications for the future co-existence of robots and humans. Motivated by the recent explosion of interest around blockchains, and by our extensive work on open-source blockchain technology and its integration into robotics, this paper addresses these questions and provides insights into how blockchains and other decentralized technologies can impact our interactions with robots and enable the social integration of robots into human society.
As personal social robots become more prevalent, the need for the designs of these systems to explicitly consider pets becomes more apparent. However, it is not known whether dogs would interact with a social robot. In two experiments, we investigate whether dogs respond to a social robot after the robot calls their names, and whether dogs follow 'sit' commands given by the robot. We conducted a between-subjects study (n = 34) to compare dogs' reactions to a social robot with their reactions to a loudspeaker. Results indicate that dogs gazed at the robot more often after the robot called their names than after the loudspeaker did, and followed the 'sit' command more often when it was given by the robot than by the loudspeaker. The contribution of this study is that it is the first to provide preliminary evidence that 1) dogs show positive behaviors toward social robots and 2) social robots can influence dogs' behaviors. This study enhances the understanding of the nature of social interactions between humans and social robots from an evolutionary perspective. Possible explanations for the observed behavior might point toward dogs perceiving robots as agents, the embodiment of the robot creating pressure for socialized responses, or the multimodal (i.e., verbal and visual) cues provided by the robot being more attractive than our control condition.
It is critical for designers of language-capable robots to enable some degree of moral competence in those robots. This is especially critical at this point in history due to the current research climate, in which much natural language generation research focuses on language modeling techniques whose general approach may be categorized as "fabrication by imitation" (the titular mechanical "bull"), which is especially unsuitable in robotic contexts. Furthermore, it is critical for robot designers seeking to enable moral competence to consider previously under-explored moral frameworks that place greater emphasis than traditional Western frameworks on care, equality, and social justice, as the current sociopolitical climate has seen a rise of movements such as libertarian capitalism that have undermined those societal goals. In this paper we examine one alternate framework for the design of morally competent robots, Confucian ethics, and explore how designers may use this framework to enable morally sensitive human-robot communication through three distinct perspectives: (1) How should a robot reason? (2) What should a robot say? and (3) How should a robot act?
Introducing robots into the home changes the work that the homeowners carry out. This paper explores in depth how garden work and the garden change when a robotic lawn mower is introduced. The methodology in this study is autoethnography, which gives access to personal experiences and thoughts. The paper describes how the usual work of manually mowing the lawn is automated and new tasks emerge as the gardeners adapt to the robot mower. A conceptual framework is presented and used to analyse these changes. Some gardening tasks become redundant and new tasks appear, and a new urgency is added to some of the old tasks. In addition, awareness about the robot mower's movements is important to keep it active and avoid damage to the robot and things in the garden. The paper suggests that unwanted changes that become too demanding are important for user acceptance or rejection of robots in the home environment.
As robots become more prevalent, the importance of the field of human-robot interaction (HRI) grows accordingly. As such, we should endeavor to employ the best statistical practices. Likert scales are commonly used metrics in HRI to measure perceptions and attitudes. Due to misinformation or honest mistakes, most HRI researchers do not adopt best practices when analyzing Likert data. We conduct a review of psychometric literature to determine the current standard for Likert scale design and analysis. Next, we conduct a survey of four years of the International Conference on Human-Robot Interaction (2016 through 2019) and report on incorrect statistical practices and design of Likert scales. During these years, only 3 of the 110 papers applied proper statistical testing to correctly-designed Likert scales. Our analysis suggests there are areas for meaningful improvement in the design and testing of Likert scales. Lastly, we provide recommendations to improve the accuracy of conclusions drawn from Likert data.
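As a concrete illustration of the kind of practice at issue, the sketch below shows one commonly cited convention for Likert data: treating single items as ordinal (non-parametric test) and treating a multi-item scale as interval-like only after checking internal consistency. This is a minimal sketch under those assumptions, not the analysis pipeline or recommendations of the paper; column shapes, thresholds, and function names are illustrative.

```python
# Minimal sketch (not the paper's code): one common convention for analysing
# Likert data from two independent groups. Thresholds are illustrative.
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """items: participants x items matrix of a multi-item scale."""
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    k = items.shape[1]
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def analyze(group_a_items: np.ndarray, group_b_items: np.ndarray):
    # A single Likert item is ordinal: use a non-parametric test per item.
    _, p_item = stats.mannwhitneyu(group_a_items[:, 0], group_b_items[:, 0])

    # A multi-item scale (averaged) is often treated as interval-like,
    # provided internal consistency is acceptable (alpha >= ~0.7).
    alpha = cronbach_alpha(np.vstack([group_a_items, group_b_items]))
    _, p_scale = stats.ttest_ind(group_a_items.mean(axis=1),
                                 group_b_items.mean(axis=1))
    return p_item, alpha, p_scale
```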
In this study, we propose a form of telework called "avatar work" that enables people with disabilities to engage in physical work such as customer service, in order to realize an inclusive society in which we can do anything we set our minds to, even while bedridden. In avatar work, people with disabilities can remotely engage in physical work by operating a proposed robot, "OriHime-D", with a mouse or gaze input depending on their own disabilities. As a social implementation initiative of avatar work, we opened a two-week limited avatar robot cafe and evaluated remote employment by people with disabilities using OriHime-D. Based on the results from 10 people with disabilities, we confirmed that the proposed avatar work leads to mental fulfillment for people with disabilities, and can be designed with an adaptable workload. In addition, we confirmed that the work content of the experimental cafe is appropriate for people with a variety of disabilities seeking social participation. This study contributes to lifelong fulfillment and lifelong working, and at the same time points toward a solution to the employment shortage problem.
A future world populated by robots is a projection that has long inspired, and still inspires, science-fiction stories. As their complexity increases, people cannot help but compare themselves to these artificial entities and question their own "human nature". While it is easy to assess the superiority of humans on many dimensions, there is one, central to all humans, that does not favor our kind: mortality. In this study, we investigate how individuals react toward a robot that makes this comparison dimension stand out, and how it may shape attitudes towards robots and pro/antisocial behaviors towards them. We also evaluate the role of robot anthropomorphism and the activation of spiritual thoughts as mediators of the above-mentioned process. Our results demonstrate that facing a robot that confronts people with their own mortality results in less positive attitudes towards robots and a higher likelihood of acting negatively toward them. This particular robot also tends to elicit more spiritual thoughts among participants and less attribution of human traits. Finally, we show that spirituality may be paradoxically associated with more anthropomorphism and more negative attitudes, showing the ambiguous multi-component stance of this dimension. These results are discussed in terms of attitudes towards robots, Terror Management Theory, the social comparison process, and implications for future human-robot interaction (HRI).
To research conversational factors in robot design, the use of Wizard of Oz (WoZ) experiments, where an experimenter plays the part of the robot, is common. However, for conversational systems using a synthetic voice, it is extremely difficult for the experimenter to choose open-domain content and enter it quickly enough to retain conversational flow. In this demonstration we show how voice puppetry can be used to control a neural TTS system in almost real time. The demo aims to explore the limitations and possibilities of such a system for controlling a robot's synthetic voice in conversational interaction.
Robotics is a very diverse field with robots of different sizes and sensory configurations created with the purpose of carrying out different tasks. Different robots and platforms each require their own software ecosystem and are coded with specific algorithms which are difficult to translate to other robots.
Digital Manufacturing (DM) broadly refers to applying digital information to enhance manufacturing processes, supply chains, products and services. In past work we proposed a low-cost DM architecture, supporting flexible integration of legacy robots. Here we discuss a demo of our architecture using an HRI scenario.
In the Digital Manufacturing on a Shoestring project we focus on low-cost digital solution requirements for UK manufacturing SMEs. This paper shows that many of these fall in the HRI domain while presenting the use of low-cost and off-the-shelf technologies in two demonstrators based on voice assisted production.
CardBot is a cardboard-based programmable humanoid robot platform designed for inexpensive and rapid prototyping of Wizard of Oz interactions in HRI, incorporating technologies such as Arduino, Android and Unity3d. The table demonstration showcases the design of the CardBot and its wizard controls, such as animating movements and coordinating speech and gaze, for orchestrating an interaction.
Various types of modular robotic kits, such as the Lego Mindstorms [1], the edutainment robot kit by ROBOTIS [2], and the interactive face components of FacePartBot [3], have been developed and suggested to increase children's creativity and to help them learn robotic technologies. By adopting a modular design scheme, these robotic kits enable children to design various robotic characters with plenty of flexibility and creativity, such as humanoids, robotic animals, and robotic faces. However, because a robot is an artifact that perceives an environment and responds to it accordingly, it can also be characterized by the environment it encounters. Thus, in this study, we propose a modular robotic kit that is aimed at creating an interactive environment to which a robot produces various responses. We chose intelligent tapes to build the environment for the following reasons. First, we presume that decreasing consumers' expectations toward the functionalities of robotic products may increase their acceptance of the products, because this prevents a mismatch between the functions expected from their appearance and the actual functions of the products [4]. We believe that tape, a material found in everyday life, is well suited to lowering consumers' expectations toward the product and will be helpful for its acceptance. Second, tape is a familiar and enjoyable material for children, and it can be used as a flexible module, which users can cut into whatever size they want and attach and detach with ease. In this study, we developed a modular robotic kit for creating an interactive environment, called the TapeBot. The TapeBot is composed of a main character robot and modular environments, which are the intelligent tapes. The main character robot consists of a main board, an RFID reader, a tilt sensor, and a speaker. It detects the RFIDs embedded in the intelligent tapes, on which various images of environments such as grass, water waves, and roads are printed, and generates the corresponding sound. In addition, different sounds are produced according to the specific setting of an environment by using the tilt sensor. For example, when the grass-printed tape is attached to a flat floor, a sheep's bleating sound is generated, indicating a lawn at low altitude. On the other hand, when the grass-printed tape is attached to a slope, the chirping of mountain birds is generated, indicating a mountain at high altitude. Although previous robotic kits focused on building a robot, the TapeBot allows its users to focus on the environment that the robot encounters. By reversing the frame of thinking, we expect that the TapeBot will promote children's imagination and creativity by letting them develop creative environments to design the interactions of the main character robot.
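The RFID-plus-tilt mapping described above can be summarised in a few lines of control logic. The following is a minimal sketch of that idea, not the TapeBot's actual firmware; the tag IDs, tilt threshold, and sound file names are illustrative assumptions.

```python
# Minimal sketch of the TapeBot mapping logic (illustrative, not the real firmware).
SOUNDS = {
    # (tape_type, on_slope): sound file
    ("grass", False): "sheep_bleat.wav",     # flat lawn at low altitude
    ("grass", True):  "mountain_birds.wav",  # sloped "mountain" grass
    ("water", False): "waves.wav",
    ("road",  False): "car_horn.wav",
}

def lookup_tape_type(rfid_tag: str) -> str:
    # Placeholder: the real kit would decode the tag embedded in the tape.
    return {"0xA1": "grass", "0xB2": "water", "0xC3": "road"}.get(rfid_tag, "unknown")

def on_tape_detected(rfid_tag: str, tilt_degrees: float, play) -> None:
    tape_type = lookup_tape_type(rfid_tag)
    on_slope = abs(tilt_degrees) > 15.0      # illustrative slope threshold
    sound = SOUNDS.get((tape_type, on_slope)) or SOUNDS.get((tape_type, False))
    if sound:
        play(sound)                          # e.g. send the file to the speaker
```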
There are many challenges when it comes to deploying robots remotely, including a lack of situation awareness for the operator, which can lead to decreased trust and lack of adoption. For this demonstration, delegates interact with a social robot that acts as a facilitator and mediator between them and the remote robots running a mission in a realistic simulator. We will demonstrate how such a robot can use spoken interaction and social cues to facilitate teaming between itself, the operator and the remote robots.
The PlantBot is a home device that shows iconographic or simple lights to depict actions that it requests a young person (its user) to do as part of Behavioral Activation therapy. In this initial prototype, a separate conversational speech agent (i.e., Amazon Alexa) is wizarded to act as a second system the user can interact with.
Special Operations Forces (SOF) are facing extreme risks when prosecuting crimes in uncharted environments like buildings. Autonomous drones could potentially save officers' lives by assisting in those exploration tasks, but an intuitive and reliable way of communicating with autonomous systems is yet to be established. This paper proposes a set of gestures that are designed to be used by SOF during operation for interaction with autonomous systems.
We developed a method for modifying emotive robot movements with a reduced dependency on domain knowledge by using neural networks. We use hand-crafted movements for a Blossom robot and a classifying variational autoencoder to adjust affective movement features by using simple arithmetic in the network's learned latent embedding space. We will demonstrate the workflow of using a graphical interface to modify the valence and arousal of movements. Participants will be able to use the interface themselves and watch Blossom perform the modified movements in real time.
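The latent-space editing step described above can be expressed as simple vector arithmetic. The sketch below assumes a pre-trained classifying VAE exposing `encode`/`decode` functions and labelled example movements; it is an illustration of the general technique, not the authors' implementation.

```python
# Minimal sketch of latent-space editing for affective movement features.
import numpy as np

def attribute_direction(encode, high_examples, low_examples):
    """Average latent difference between high- and low-attribute movements
    (e.g. high vs. low arousal); a simple proxy for the learned attribute axis."""
    z_high = np.mean([encode(m) for m in high_examples], axis=0)
    z_low = np.mean([encode(m) for m in low_examples], axis=0)
    return z_high - z_low

def modify_movement(encode, decode, movement, direction, strength=1.0):
    """Shift a movement along an affect axis by simple vector arithmetic."""
    z = encode(movement)
    return decode(z + strength * direction)
```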
This interdisciplinary project aims to assess and manage the risks relating to the transition of Kazakh language from Cyrillic to Latin in Kazakhstan in order to address challenges of a) teaching and motivating children to learn a new script and its associated handwriting, and b) training and providing support for all demographic groups, in particular senior generation. We present the system demonstration that proposes to assist and motivate children to learn a new script with the help of a humanoid robot and a tablet with stylus.
Social robots might be more effective if they could adapt in playful, comedy-inspired ways based on heard social cues from users. Jon the Robot, a robotic stand-up comedian from the Oregon State University CoRIS Institute, showcases how this type of ability can lead to more enjoyable interactions with robots. We believe conference attendees will be both entertained and informed by this novel demonstration of social robotics.
We present Crowd of Oz (CoZ) -- a crowd powered system to enable real time dialogue between a Pepper robot and a user. CoZ includes media rich interfaces and advanced architecture to swiftly elicit crowd responses for the Pepper robot. We have empirically demonstrated that users can be engaged in a beneficial discussion with the crowd-powered robot, that acts as a coach to mitigate stress.
The aim of this study is to understand users' experience and their perceived privacy while interacting with a crowd-operated social robot. We conducted a between-subjects user study in which the robot broadcast both audio and video to crowd workers in one condition, and only the participants' audio cues in the other. A sample of 14 students took part in the study and was divided into two groups (Video and No-Video). Participants were asked to use the help of a crowd-operated Pepper robot to find their next holiday destination. Once the interaction was completed, participants assessed the social intelligence, user experience and privacy aspects of the robot in both conditions. Participants experienced no significant differences in social intelligence or user experience across the conditions. Interestingly, the group receiving the audio-only broadcast feed perceived less privacy than the group receiving the audio-video feed.
The central research question of this work is: can robot motion effectively communicate distinct robot personalities? In this study, we implemented three distinct robot motion personalities inspired by a subset of the seven dwarfs: Happy, Sleepy, and Grumpy. We implemented autonomous motion generation systems that mapped each personality to path shape, timing, and seeking/avoidance-of-the-participant features. A user study demonstrated that our 24 participants could distinguish these personalities. Robot motion style predicted robot personality features such as politeness, friendliness, and intelligence, which, for the most part, matched logically to the intended dwarf personality designs. These results indicate that robot motion style is sufficient to indicate a robot's personality during its interactive behaviors with people.
This work explores the use of the Cozmo robot to deliver mathematics education. Recently, much work has been presented on using robots for education, but few studies centre on children aged 14-17 in combination with a non-humanoid robot. Reflecting on this limitation, Cozmo autonomously delivered engaging material and exercises to learners. We studied young learners' subjective ratings of their knowledge gains with Cozmo on the topics of algebra, geometry and trigonometry. We found that participants' subjective ratings of their knowledge changed significantly after the interaction. This implies a positive influence of employing Cozmo with this age group and also reflects the need for more research in schools with affordable, non-humanoid, autonomous social robots.
We report on a user study that sought to understand how users program robot tasks by direct demonstration and what problems they encounter when using a state-of-the-art robot programming interface to create and edit robot programs. We discuss how our findings translate to design opportunities in end-user robot programming.
Recent studies in human-human interaction (HHI) have revealed the propensity of negative emotional expression to initiate affiliating functions that are beneficial to the expresser and also help to foster cordiality and closeness amongst interlocutors. However, efforts in human-robot interaction (HRI) have not attempted to investigate the consequences of the expression of negative emotion by robots on HRI. Thus, as a first step, this study aims to furnish humanoid robots with natural audio-visual anger expression for HRI. Based on the analysis results from a multimodal HHI corpus, we implemented different types of gestures related to anger expressions for humanoid robots and carried out a subjective evaluation of the generated anger expressions. Findings from this study revealed that the semantic context and functional content of anger-based utterances play a significant role in the choice of gesture to accompany such utterances. Our current results show that the "pointing" gesture is judged more appropriate for utterances containing "you" and for anger-based "questioning" utterances, while the "both arms spread" and "both arm swing" gestures were evaluated as more appropriate for "declarative" and "disagreement" utterances, respectively.
We evaluate the effectiveness of the type of directions given by a robot in a joint-activity task. The joint-activity task we focus on involves a humanoid robot giving directions to a human subject in exploring an unknown environment and finding objects hidden in certain locations inside the environment. The experiment is conducted as a between-group study with 8 participants, in which one group is given directions by the robot verbally and the other group is given nonverbal directions in the form of hand gestures and gaze direction. Our findings show that the type of directions given by the robot does have a noticeable impact on the performance of the human subjects in the experiment. Specifically, the group given nonverbal directions by the robot is on average 70% quicker in finding all the hidden objects than the group given verbal directions. Using post-experiment survey responses from the participants of our study groups, we evaluate the hypothesis from prior research that this discrepancy is correlated with a reduced perceived mental workload when the robot provides nonverbal directions.
To investigate the reproducibility of the uncanny valley and its robustness to exposure to robotic technologies, we carried out a direct reproduction of Strait et al. (2015, 2017) and conceptual reproduction of Złotowski et al. (2017). Consistent with prior findings, we observed an uncanny valley in participants' responses to agents ranging in appearance from mechanomorphic to highly humanlike to human. Specifically, participants exhibited particular aversion to and avoidance of highly humanlike robots relative to their less humanlike counterparts and humans. However, prior exposure via a 1-hour interaction with the NAO robot ($N=52$) was not found to significantly affect participants' responding relative to that of participants without any exposure to robotic technologies ($N=34$). The present work thus provides further indication that the uncanny valley reliably manifests, both with and without exposure.
In this paper, we present the design of a visual feedback mechanism using Augmented Reality, which we call augmented visual cues, to assist pick-and-place tasks during robot control. We propose to augment the robot operator's visual space in order to avoid attention splitting and increase situational awareness (SA). In particular, we aim to improve on the SA concepts of perception, comprehension, and projection as well as the overall task performance. For that, we built upon the interaction design paradigm proposed by Walker et al. On the one hand, our design augments the robot to support picking tasks; on the other hand, we augment the environment to support placing tasks. We evaluated our design in a first user study, and results point to specific design aspects that need improvement while showing promise for the overall approach, in particular regarding user satisfaction and certain SA concepts.
Adapting behaviors based on others' reactions is a fundamental skill for a social robot that must interact with people. In this work, we develop a systematic method to collect ecologically plausible data of human reactions to robot behaviors and their associated valence. We designed a dyadic interaction where 24 participants played a board game in a human-robot team for a chance to win a chocolate. The ''Grumpy robot'' is responsible for losing an easy-to-win game, while the ''Kind robot'' is responsible for winning a seemingly impossible-to-win game. Questionnaires show that participants recognize both robots' critical impact on the game's outcome, but show similar social attraction towards both. Video-recorded reactions are distinct: smiles and neutral faces toward the ''Kind robot'', and laughter, confusion, or shock toward the ''Grumpy robot''. The collected data will be used to teach the robot to understand human reactions.
Personality is a vital factor in understanding acceptance, trust and emotional attachment to a robot. From R2D2 to WALL-E, there is a rich history of robots in films using semantic-free utterances (SFUs), sounds such as squeaks, clicks and tones, as audio gestures to communicate and convey emotion and personality. However, unlike in a film, where an actor can pretend to understand non-verbal noises, in a practical application synthesised speech is often used to communicate information, intention and status. Here we present a pilot study exploring the impact of mixing speech synthesis with SFUs on perceived personality. In a listening test, subjects were presented with short synthesised utterances with and without SFUs, together with a picture of the agent as either a tabletop social robot or a young man. Both the picture and the SFUs had an impact on perceived personality. However, no interaction was seen between SFUs and picture, suggesting that listeners failed to fuse the perceptions of the two audio elements, perceiving the SFUs as background noises rather than audio gestures generated by the agent.
Trust in automated driving systems is crucial for effective driver-(semi)autonomous vehicles interaction. Drivers that do not trust the system appropriately are not able to leverage its benefits. This study presents a mixed design user experiment where participants conducted a non-driving task while traveling in a simulated semi-autonomous vehicle with forward collision alarm and emergency braking functions. Occasionally, the system missed obstacles or provided false alarms. We varied these system error types as well as road shapes, and measured the effects of these variations on trust development. Results reveal that misses are more harmful to trust development than false alarms, and that these effects are strengthened by operation on risky roads. Our findings provide additional insight into the development of trust in automated driving systems, and are useful for the design of such technologies.
In this paper, we investigate the relationship between situation awareness and compliance in human-robot interaction. We carried out a between-subjects experiment (N=30) in which a robot interrupts people doing solitary work to do small physical exercises. In one condition the robot displays awareness of the activity participants are currently engaged in; in the control condition the robot displays no such awareness. Results show that participants initially ignore the robot in both conditions; over time, participants interacting with the 'aware' robot comply more often with the requests and, as a result, complete more exercises.
This work focuses on methods to improve mobile robot legibility in factories using lights. Implementation and evaluation were done at a robotics company that manufactures factory robots that work in human spaces. Three new sets of communicative lights were created and tested on the robots, integrated into the company's software stack and compared to the industry default lights that currently exist on the robots. All three newly designed light sets outperformed the industry default. Insights from this work have been integrated into software releases across North America.
A burgeoning area of HRI explores how robots can be used to assist older adults who are dealing with health issues. However, much of this work focuses on aiding older adults that are living with dementia or other serious health-related problems. In this work, we focus on robots helping otherwise-healthy older adults living with social isolation and loneliness. We created an initial robot behavior design, which leverages techniques from therapy, to provide emotional support through basic conversational ability.
The aim of this study was to investigate whether robot morphology (i.e., anthropomorphic, zoomorphic, or caricatured) influences children's perceptions of animacy, anthropomorphism, social presence, and perceived similarity. Based on a sample of 35 children aged seven to fourteen, we found that, depending on the robot's morphology, children's perceptions of anthropomorphism, social presence and perceived similarity varied, with the anthropomorphic robot typically ranking higher than the zoomorphic robot. Our findings suggest that the morphology of social robots should be taken into account when planning, analyzing, and interpreting studies on child-robot interaction.
Touch is central to communication and social interaction. For both humans and robots touch is a mode through which they sense the world. A second wave of industrial robots is reshaping how touch operates within the labor process. Recent studies have turned their attention to the role of touch in Human-Robot Interaction (HRI). While these studies have produced useful knowledge in relation to the affective capacities of robotic touch, methods remain restrictive. This paper contributes to expanding research methods for the study of robotic touch. It reports on the design of an ongoing ethnography that forms part of the InTouch project. The interdisciplinary project takes forward a socially orientated stance and is concerned with how technologies shape the semiotic and sensory dimensions of touch in the 'real world'. We contend that these dimensions are key factors in shaping how humans and robots interact, yet are currently overlooked in the HRI community. This multi-sited sensory ethnography research has been designed to explore the social implications of robotic touch within industrial settings (e.g. manufacturing and construction).
In this paper, we present DeepTaxi, an extension to an existing autonomous RC car platform that allows for dynamic learning of an environment. DeepTaxi employs a social scaffolding approach in which a human user supervises and initially provides feedback to the car so that it can learn the names and order of various objects located around a track. Once it has sufficiently learned about the environment, DeepTaxi can autonomously navigate to any desired location without human assistance. We test DeepTaxi with human participants on a custom-made track with a variety of objects and orderings. We find that it can successfully learn about and navigate the track, with participants expressing appreciation for the timeliness of the car's communication.
During the past decade soft robotics has emerged as a growing field of research. In this paper we present exploratory research on sound design for soft robotics with potential applications within the human-robot interaction domain. We conducted an analysis of the sounds made by imaginary soft creatures in movies. Drawing inspiration from the analysis, we designed a soft robotic prototype that features real-time sound generation based on FM synthesis.
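Since the prototype's sound generation is based on FM synthesis, the sketch below shows the classic two-operator formulation and one way a robot state variable might drive it. This is a minimal illustration of the general technique under assumed parameter mappings, not the authors' sound design.

```python
# Minimal FM-synthesis sketch; parameter mappings are illustrative assumptions.
import numpy as np

def fm_tone(duration_s, sr=44100, carrier_hz=220.0, mod_hz=110.0, mod_index=2.0):
    """Two-operator FM: y(t) = sin(2*pi*fc*t + I*sin(2*pi*fm*t))."""
    t = np.linspace(0, duration_s, int(sr * duration_s), endpoint=False)
    return np.sin(2 * np.pi * carrier_hz * t
                  + mod_index * np.sin(2 * np.pi * mod_hz * t))

def soft_robot_sound(actuation_level, duration_s=0.5):
    # Hypothetical mapping: higher actuation -> higher, brighter sound.
    carrier = 150.0 + 300.0 * actuation_level
    index = 1.0 + 4.0 * actuation_level
    return fm_tone(duration_s, carrier_hz=carrier, mod_hz=carrier * 0.5, mod_index=index)
```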
Progress in the application of robots to everyday scenarios has increased interest in human-robot interaction (HRI) research. However, robots' limited social skills are associated with less positive human attitudes during HRI. Here, we put forward the idea of developing adaptive Theory of Mind (ToM) model-based systems for social robotics, able to deal with new situations and interact with different users in new tasks. To this end, we bring together current research from developmental psychology debating the computational processes underlying ToM to inform HRI strategy development. Defining a model describing adaptive ToM processes may in fact aid the development of adaptive robotic architectures for more flexible and successful HRI. Finally, we hope with this report both to further promote the cross-talk between the fields of developmental psychology and robotics and to inspire future investigations in this direction.
Considering that a human is not only solving a task but actively teaching a robot how to solve it has not been extensively explored, and is an important step to improve LfD algorithms. We explored the difference between solving and teaching in a sensorimotor task. In a first experiment, participants solved a continuous maze task and afterwards gave demonstrations for a robot. While teaching, the participants could give negative demonstrations (how not to solve the task). In a second experiment, we asked new participants to rate how informative they perceived the demonstrations from the first experiment to be. The results show that significantly more demonstrations from the Teaching phase were perceived as informative than from the Solving phase. Furthermore, significantly more negative than positive demonstrations were perceived as informative.
This paper is the first step of an attempt to equip social robots with emotion recognition capabilities comparable to those of humans. Most recent deep learning solutions for facial expression recognition under-perform when deployed in human-robot interaction scenarios, although they are capable of breaking records on the most varied facial expression recognition benchmarks. We believe the main reason is that they use techniques developed for the recognition of static pictures, while in real-life scenarios we infer emotions from intervals of expression. Building on the tendency of CNNs to form regions of interest that are similar to human gaze patterns, we use recordings of human gaze patterns to train such a network to infer facial emotions from three-second video footage of humans expressing six basic emotions.
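One plausible way to incorporate gaze recordings, sketched below, is to weight each video frame by a human-gaze heatmap before feeding it to a CNN and to pool predictions over the three-second clip. This is an assumption about the general approach for illustration only; the backbone, tensor shapes and pooling scheme are not taken from the paper.

```python
# Minimal sketch: gaze-weighted frames fed to a CNN, pooled over a short clip.
import torch
import torch.nn as nn

class GazeWeightedEmotionNet(nn.Module):
    def __init__(self, backbone: nn.Module, n_emotions: int = 6):
        super().__init__()
        self.backbone = backbone          # any image CNN returning per-frame features
        self.head = nn.LazyLinear(n_emotions)

    def forward(self, frames, gaze_maps):
        # frames: (T, 3, H, W); gaze_maps: (T, 1, H, W) normalised heatmaps
        weighted = frames * gaze_maps     # emphasise regions humans look at
        feats = self.backbone(weighted)   # (T, F) per-frame features
        logits = self.head(feats)         # per-frame emotion logits
        return logits.mean(dim=0)         # clip-level prediction over ~3 seconds
```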
In this work, we study and model how two factors of human cognition, trust and attention, affect the way humans interact with autonomous vehicles. We develop a probabilistic model that succinctly captures how trust and attention evolve across time to drive behavior, and present results from a human-subjects experiment where participants interacted with a simulated autonomous vehicle while engaging with a secondary task. Our main findings suggest that trust affects attention, which in turn affects the human's decision to intervene with the autonomous vehicle.
Companion robots have potential for improving wellbeing within aged care; however, the literature focuses on shorter-term studies, often using relatively expensive platforms, raising concerns around novelty effects and economic viability. Here, we report ecologically valid diary data from two supported living facilities for older people with dementia or learning difficulties. Both sites implemented Joy for All robot animals and maintained diaries for six months. Entries were analysed using thematic analysis. We found robot use increased over the six months, changing from short, structured sessions to mainly permanent availability. Thus, previously reported concerns about novelty were not warranted. Both sites reported positive outcomes including reminiscence, improved communication and potential wellbeing benefits (reduced agitation/anxiety). Incidences of negative responses included devices described as 'creepy.' Devices appeared sufficiently robust for prolonged daily use with multiple users. Overall, we provide insight into the real-world implementation of affordable companion robots and the longitudinal development of their use.
As the world's population is rapidly ageing, social robots are being developed to help patients manage their chronic conditions at home. An important task for robots is to remind people of their medications. Although medication management systems aim to simplify the process of programming robots, these systems often suffer from usability issues that increase data entry errors. This study aimed to investigate whether cosmetic and validation changes in a social robot medication management system (Robogen) improved system usability and reduced medication errors. Forty participants underwent a 60-minute study during which they entered prescriptions using both the old and new systems in a random order. System usability and preference were assessed using questionnaires. Results showed that the new system (Robogen 2) had significantly higher usability scores and was preferred by the significant majority of participants (80%). This new system has been adopted for programming the robot in subsequent healthcare studies.
To facilitate the interaction of robots with people in public spaces, it would be beneficial for them to use social behaviours, i.e. low-level behaviours that suggest the robot is a social agent. However, the implementation of such social behaviours is difficult for novice users, i.e. non-roboticists. In this contribution, we present a high-level visual programming system that enables novices to design robot tasks which already incorporate social behavioural cues appropriate for the robot being programmed. A pilot study of this system in a museum, involving members of the public designing guided tours, demonstrated that the addition of the low-level social cues improves the perception of the robot and the effectiveness of the designed task behaviour. A number of areas of further exploration and development are highlighted.
Social robots can be used to motivate children to engage in learning activities in education. In such contexts, they might need to persuade children to achieve specific learning goals. We conducted an exploratory study with 42 children in a museum setting. Children were asked to play an interactive storytelling game on a touchscreen. A Furhat robot guided them through the steps of creating the character of a story in two conditions. In one condition, the robot tried to influence children's choices using high-controlling language. In the other, the robot left children free to choose and used low-controlling language. Participants in the persuasive condition generally followed the indications of the robot. Interestingly, the use of high-controlling language did not affect children's perceived trust towards the robot. We discuss the important implications that these results may have when designing child-robot interactions.
This paper aims to improve engagement in human-robot interactions by enabling social robots to call people by their names during a social interaction. To this end, we propose to integrate computer vision, online learning using a convolutional neural network, and speech technologies to learn names and thereby facilitate and improve engagement. Our experiments show that human-robot engagement improves significantly with the proposed approach.
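The core interaction loop such a system needs can be sketched as: embed the face, look up the nearest known person, and if unknown, ask for the name via speech and store the association. The sketch below is a minimal illustration under those assumptions, with placeholder `embed_face`, `listen`, and `say` functions; it is not the paper's architecture.

```python
# Minimal sketch of a name-learning loop with placeholder perception/speech functions.
import numpy as np

class NameMemory:
    def __init__(self, threshold=0.6):
        self.embeddings, self.names, self.threshold = [], [], threshold

    def recall(self, embedding):
        if not self.embeddings:
            return None
        dists = [np.linalg.norm(embedding - e) for e in self.embeddings]
        i = int(np.argmin(dists))
        return self.names[i] if dists[i] < self.threshold else None

    def learn(self, embedding, name):
        self.embeddings.append(embedding)
        self.names.append(name)

def greet(face_image, memory, embed_face, listen, say):
    embedding = embed_face(face_image)        # e.g. output of a CNN face encoder
    name = memory.recall(embedding)
    if name is None:
        say("Hi! I don't think we've met. What's your name?")
        name = listen()                        # speech-recognition result
        memory.learn(embedding, name)
    say(f"Nice to see you, {name}!")
```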
As robots are increasingly placed in direct interaction and often times in physical contact with people, understanding how touch by a robot influences interactions has become an important topic in HRI. Although prior research in HRI has shown robotic touch to elicit both positive and negative reactions, it remains an open question when and why touch is perceived as positive or negative. Here we apply expectancy violations theory (EVT) to shed light onto this question. We present an online study with N=142 participants that investigates the impact of context (touch after error vs. touch after no error) and robot appearance (social/animated face vs. non-social/blank screen) in shaping the perception of a robot's touch. We found that robot-initiated touch from a non-social robot was rated more positively after an error compared to ratings of the non-social robot touch after no error. Open-ended responses showed that people attach a wide array of meanings to the robot's touch, highlighting the importance of additional cues that are needed to ensure that people understand the intention of the touch.
In this position paper we report on a study of a Korean business that employs people with cognitive and developmental disabilities (DDs) across a variety of operations. The goal of the study was to contribute to the development of scenarios involving the use of a robotic platform to enhance the work-experience of the disabled employees. Based on our findings, we argue for the importance of understanding the broad organizational and bureaucratic properties of a business or workplace when devising HRI scenarios, and of bringing elements like business models, operating philosophy and organizational hierarchies directly into the design process.
This study examines how players describe their rationale behind decisions made during gameplay of Detroit: Become Human and how responses may coincide with character attachment (CA). Semi-structured interviews were conducted to examine the presence of character attachment and how it may lead to the player's understanding of their gameplay choices. Both the emotions (or lack of) of the avatar and strategizing about gameplay emerged as two central themes. Results are discussed in light of human-robot interaction based on popular media.
Moments of uncertainty are common for learners when practicing a second language. Appropriately managing these events could help avoid the development of frustration and benefit the learner's experience; their detection is therefore crucial in language-practice conversations. In this study, an experimental conversation between an adult second language learner and a social robot is used to visually characterize the learner's uncertainty. The robot's output is manipulated at the prosodic and lexical levels to provoke uncertainty during the conversation. The learners' reactions are then processed to obtain Facial Action Units (AUs) and gaze features. Preliminary results show distinctive behavioral patterns of uncertainty among the participants. Based on these results, a new annotation scheme is proposed, which will expand the data used to train sequential models to detect uncertainty. As future steps, the robotic conversational partner will use this information to adapt its behavior in dialogue generation and language complexity.
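A sequential model over per-frame AU and gaze features could take the form sketched below. This is a minimal, generic illustration (a small GRU classifier); the feature dimensionality, window length and model family are assumptions, not the authors' configuration.

```python
# Minimal sketch of a sequence classifier over AU + gaze features.
import torch
import torch.nn as nn

class UncertaintyDetector(nn.Module):
    def __init__(self, n_features=20, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):
        # x: (batch, time, n_features) — per-frame AU intensities and gaze angles
        _, h = self.rnn(x)
        return torch.sigmoid(self.out(h[-1]))   # probability of an uncertainty event
```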
This study explores how different patterns of robot emotional behavior influence people's intentions to help a robot requiring human assistance. In this online study, participants saw a set of videos of a robot receiving human help to get free of an obstruction, with different emotional behavior in each video. Initial results suggest that people would be more willing to aid a robot that showed either happy or sad behavior while stuck, compared to one that remained neutral. There was no influence of the behavior shown by the robot after being freed. This suggests that it is the behavior of the robot while stuck that is most influential in people's perceptions and intentions to interact with the robot. Furthermore, the finding that positive behavior increased intention to help suggests that behavior patterns that are unusual by human standards may be available to robots to gain assistance, but further work is required to identify whether this translates into actual helping.
Feedback plays an important role in language learning. However, limited research can be found on the influence of feedback in robot-assisted language learning. Therefore, this study aims to identify the effects of robot-feedback on learning gain, motivation, and anthropomorphism. In total, 60 students participated in a language learning task, with a robot using one of three feedback conditions: reward, punishment, and no feedback. The results showed that feedback only affected learning gain: students learned more with punishment, followed by reward, compared to no feedback. Thus, our results underscore the importance of feedback in RALL.
In this study, we investigate the effect of a social robot using head and arm gestures mimicking lexical tones (i.e. pitch gestures) on learning the pronunciations and translations of Chinese characters. Performance was compared between two within-subjects conditions: a Gesture Observation condition, in which the robot used pitch gestures to teach six characters, and a No Gesture condition, in which the robot taught six characters without gestures. Participants (N = 21) were tested on how well they pronounced and translated the learned characters. The study showed that a robot not using gestures enhanced learning, but only when participants could first familiarize themselves with learning Chinese characters from the robot using pitch gestures. These results suggest that prior knowledge of learning Chinese attained from a robot using pitch gestures improved recall of the characters during a learning module with a robot not using gestures.
Rehabilitation interventions for children with motor or cognitive impairments need to be effectively measured in terms of engagement, in order to understand the effects of the rehabilitation process and to enable more targeted interventions leading to improved experiences for both children and rehabilitation therapists. Indeed, rehabilitation practice may be enhanced by attending to the client's signals of engagement in therapy. In this paper, we present the results of the validation of the PARE scale (Pediatric Assessment of Rehabilitation Engagement), which was designed to capture the dimensions of affective, cognitive, and behavioral engagement in client-rehabilitation provider interactions. Results showed that the scale can be effectively used to measure the different components of engagement in intervention settings, to support therapeutic virtual scenarios or the application of robots to treat motor and cognitive impairments.
While feedback currently generates much interest among many scholars, how feedback is perceived in cross-cultural contexts has not been extensively studied yet, due to considerable methodological obstacles. In this study, we investigate how different ways of providing feedback are perceived by inhabitants of neighboring countries such as Denmark, Germany and Poland. Based on initial analyses of different feedback strategies in these countries, we used a robot to deliver both positive and negative feedback. Using a robot has the advantage that the feedback is provided by an embodied interactant, yet whose behavior can be completely controlled. We carried out a questionnaire study in which the EZ-bot presented feedback using strategies identified as common in either Denmark, Poland or Germany; participants were then asked to rate the robot. The results show highly significant differences in the perception of different feedback strategies even in countries in geographical proximity. Using robots for studying cross-cultural communication differences thus constitutes a promising methodology.
This short report presents a small-scale explorative study of the interaction of children with Autism Spectrum Disorder (ASD) with robots during clinical encounters. This is part of an ongoing project which aims to define a robotic service for supporting children with developmental disabilities and increasing the efficiency of routine procedures that may create distress, e.g. having blood taken or an orthopaedic plaster cast applied. Five children with confirmed diagnoses of ASD interacted with two social robots: the small humanoid NAO and the pet-like MiRo. The encounters mixed play activities with a simulated clinical procedure. We included parents/carers in the interaction to ensure the child was comfortable and at ease. The results of video analysis and parents' feedback confirm possible benefits of the physical presence of robots in reducing children's anxiety and increasing compliance with instructions. Parents/carers convincingly support the introduction of robots in hospital procedures to help their children.
A study at the Canada Science and Technology Museum with 121 child/parent pairs compared attitudes toward NAO and Jibo in child-robot interaction (CRI) scenarios. The study suggests that subjects (i) favor the robots roughly equally but for different reasons, (ii) are motivated more by autonomous web smartness than domain-specific knowledge, and (iii) prefer human-like gesturing over screen animation for the expression of emotion.
Research on language attitudes concerns the identification of the beliefs people hold towards speakers of a particular variety (for instance, a dialect) or towards speakers with a foreign accent. While researchers have been very creative in finding methods for determining speaker attitudes towards their own and others' linguistic productions, robots provide an excellent methodological tool to study language attitudes. We illustrate this methodology on the perception of transfer of speech melody from one's mother tongue to a second language. Our results show effects of such transfer on the perception of the respective speaker; for instance, Danish speakers may be perceived as dominant when transferring their intonation contours to German, whereas Germans may appear formal when transferring their speech melody into Danish.
As robots become prevalent, merely thinking of their existence may affect how people behave. When interacting with a robot, people conformed to the robot's answers more than to their own initial responses [1]. In this study, we examined how robots affect conformity to other humans. We primed participants to think of different experiences: Humans (an experience with a human stranger), Robots (an experience with a robot), or Neutral (daily life). We then measured whether participants conformed to other humans in survey answers. Results indicated that people conformed more when thinking of Humans or Robots than of Neutral events. This implies that robots have an effect on human conformity to other humans similar to that of human strangers.
Researchers have explored how robot errors affect people's perceptions of and interactions with robots. However, the types of robot errors that have been studied often reflect errors that humans tend to make, instead of those typically made by robots. In this paper we explore robot-typical errors, as opposed to human-like errors, spearheading a discussion on the kinds of mistakes we may face from robots. We specifically focus on child-robot interaction, and how robot-typical errors may occur in the presence of children.
The field of human-robot interaction (HRI) is still relatively new and often borrows methods and principles from the more established field of HCI. HRI researchers are adopting HCI methods; however, these methodologies may need slight modifications and adaptations in order to better investigate the unique challenges of working in HRI. In this paper, we present our findings from one such method: a Participatory Design (PD) workshop. We held the workshop in our assistive living lab with ten stroke survivors. The workshop aimed to explore the design of new socially assistive robotic technologies that could be used to support stroke survivors in the home environment. Some of our findings were unanticipated, suggesting that the existing framework for PD workshops needs some adaptation in order to be useful to HRI research.
A significant portion of the over 65 population experience a fall at least once a year in their home environment. This puts a faller under significant health risks and adds to the financial burden of the care services. This Late Breaking Report explores a proof of concept implementation of a fall alert system that uses MiRo (a small mobile social robot) in the home environment. We take advantage of MiRo's pet-like characteristics, small size, mobility, and array of sensors to implement a system where a person who has fallen can interact with it and summon help if needed. The initial aim of this proof of concept system described here was to act as a demonstration tool for health professionals and housing association representatives, gauging their needs and requirements, driving this research forward.
Our research aims at facilitating the design of 'Remotely Instructed Robots' for future glove-boxes in the nuclear industry. The two main features of such systems are: (1) they can automatically model the working environment and relay that information to the operator in virtual reality (VR); (2) they can receive instructions from the operator that are executed by the robot. However, the deficiency of these kinds of systems is that they rely heavily on the knowledge of expert programmers when the robot's capabilities or hardware are to be reconfigured, altered or upgraded. This late breaking report proposes to introduce a third important advancement for remotely instructed robots: (3) intuitive programming modifications by operators who are non-programmers but have basic knowledge of the hardware and, most importantly, have experience of the weaknesses in particular handling tasks.
Since children show favoritism toward in-group members over out-group members from the age of five, children who newly arrive in a country or culture might have difficulty integrating into the already settled group. To address this problem, we developed a robot-mediated music mixing game for three players that aims to bring together children from the newly arrived and settled groups. We designed the game with the robot's goal in mind and allow the robot to observe the participation of the different players in real time. With this information, the robot can encourage equal participation in the shared activity by prompting the least active child to act. Preliminary results show that the robot can potentially succeed in influencing participation behavior. These results encourage future work that studies not only the in-game effects but also effects on group dynamics.
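The participation-balancing policy described above reduces to tracking per-player activity and prompting the least active player at a bounded rate. The sketch below is a minimal illustration of that logic; the inactivity interval and prompt wording are assumptions, not the study's implementation.

```python
# Minimal sketch of a participation-balancing policy for a three-player game.
import time

class ParticipationBalancer:
    def __init__(self, players, prompt, min_interval_s=20.0):
        self.counts = {p: 0 for p in players}
        self.prompt = prompt              # e.g. the robot's speech function
        self.min_interval_s = min_interval_s
        self.last_prompt = 0.0

    def record_action(self, player):
        self.counts[player] += 1          # called whenever a player adds a sound

    def maybe_prompt(self):
        now = time.time()
        if now - self.last_prompt < self.min_interval_s:
            return                        # avoid nagging
        least_active = min(self.counts, key=self.counts.get)
        self.prompt(f"{least_active}, would you like to add a sound to the mix?")
        self.last_prompt = now
```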
What will the universal remote control of the near future look like? What form will the next generation of human-computer interfaces take? Will they be conspicuous interfaces within the built environment, like a computer screen or a smart speaker? Will they resemble the ubiquitous, portable rectangles that we all carry in our pockets? We propose a third paradigm: interfaces that hide in plain sight, inconspicuously integrated into the furniture always already around us, but ready to be called upon when needed in order to establish a user interface. Our furniture-robot prototype, tbo, the TableBot, demonstrates the viability of this furniture-based human-computer paradigm.
In exhibitions by companies, there are many opportunities to give presentations to visitors using graphic slides. However, these presentations are hard work for the exhibitors, because they must repeat the same presentation several times a day to reach as many visitors as possible. This paper introduces a presentation system that uses a robot rather than a human presenter. A case study is conducted in which the proposed system is experienced by 78 visitors at an actual exhibition; the goal is to consider whether the proposed system could be an alternative to human presenters. The results indicate that presentations by real-world robots with appropriate non-verbal behaviors are more effective, in terms of acceptability and understandability, than the alternatives.
The elderly are more affected by higher environmental temperatures. If they misperceive the temperature, it can lead to a number of potentially dangerous health issues. To address this, we propose a robot that sweats to indicate the high environmental temperature to the elderly. In this paper, we present the design of our first prototype for exploring the human perception of robot sweat status.
We present the main conclusions from an analysis of a cyborg design session in which sketches were created to brainstorm novel cyborg applications. We take these sketches and analyze them using image boards, which are a collage of pictures used to communicate a description of design aesthetics and intent. We used this technique to derive patterns and conclusions from the large set of sketches, which can inform the design of future cyborg applications. The sketches reveal that wearable arms are useful for reducing physical strain and multi-tasking, but should incorporate a ''rest'' mode in full-time applications.
This work explores three concepts relevant to the study of human-robot interaction: posture, setting and evaluation methods. The first concept is the importance of a robot's posture on its perceived interaction affordances. Early findings suggest that the same robot presented in different postural arrangements may significantly impact the way the interaction is perceived. Second, there is growing evidence to suggest the importance of situating interaction studies in-the-wild. We observed that the environment an interaction is situated in strongly affects the outcome, an indication that experiments constrained to the laboratory may not reveal useful social aspects relevant to understanding HRI fully. Finally, in order to conduct in-the-wild studies, we argue that current practice of using single-strand methods may not be sufficient; we instead explore a mixed-methods approach to study the complex social and environmental interplay between the robot, the participant and the bystanders.
In this paper, we present a user study designed to examine the effect of reward/coercion persuasive strategies inspired by social power. We ran the study with 90 participants in a persuasion scenario in which they were asked to make a real choice involving a less desirable option. The preliminary results indicated that the robot succeeded in persuading the users to select a less desirable choice over a better one. However, no difference was found in the perception of the robot between the persuasion strategies.
This paper outlines an interaction-centered and dynamically constructed episodic memory for social robots, in order to enable naturalistic, social human-robot interaction. The proposed model includes a record of multi-timescale events stored in the event history, a record of multi-timescale interval definitions stored as interaction episodes, and a set of links associating specific elements of the two records. The event history is constructed dynamically, depending on the occurrence of internal and external events. The interaction episodes are defined on the basis of robot-initiated and user-initiated interactions. The episodic memory is realised within a social human-robot interaction architecture, whose components generate events pertaining to the context and state of interaction.
Previous studies in cognitive science have pointed out how the top-down processing of interpersonal cognition plays a role in human interactions, and the same mechanisms are observed in interactions with robots. This study investigates how mental models about decision-supporting robots used in court influence jury behavior. A laboratory experiment was conducted using a simple jury decision-making task, in which participants play the role of a juror and make decisions regarding the length of the sentence for a particular crime. During the task, a robot with expert knowledge provides suggestions regarding the length of the sentence, based on other similar cases. In one condition, participants receive a lecture about case-based reasoning systems before proceeding to the experiment. Statistical analysis shows that there were no significant differences between the conditions; however, some participants in the prior-knowledge condition exhibited higher conformity.
With the increasing use of social robots in real environments, one of the areas of research requiring more attention is the study of human-robot interaction (HRI) when a person and robot are moving close to each other. Designing effective ways for a robot to communicate its intention during dynamic movement depends on what people expect and how they interpret different cues from the robot. Building on the existing literature, we tested a range of non-verbal cues such as eye contact, gaze and head nodding as part of the robot's behaviour during close-proximity passing. The research aimed to investigate the effects of these cues, as well as their combination with body posture, on the efficiency of passing and the quality of HRI. Our results show that the combination of eye contact and the robot turning sideways is the most effective and appropriate compared to the other modalities.
This paper reports on a pilot study for investigating the relation between observable data from users and their trust estimation in Human Robot Collaboration. Two experiments have been set up that contain different situational risks (as one of the pre-requisites for trust investigation). Here we report on one of these. Preliminary results are promising and show a correlation between risk factors, observable behavior and subjective trust estimations.
We investigated the impact of warmth in robots' language on the perception of errors in a shopping assistance task (N=81) and found that error-free behavior was favored over erroneous behavior when the dialogue was machine-like, while errors did not negatively impact liking, trust, and acceptance when the robot used human-like language. Warmth in robots' language thus seems to mitigate the negative consequences of errors and should be considered a crucial design aspect.
Laboratory studies are time-consuming and costly. We aimed to examine whether the gestures users naturally select to communicate commands to a mobile robot during a laboratory user study are comparable to those selected during an online video-enhanced survey. 64 participants were divided into two experimental groups according to the interaction methodology. In both conditions, participants instructed the robot to perform eight different tasks using only upper-body gestures. For 7 of the 8 tasks we did not find evidence that the physical gestures by which the participants chose to communicate with the robot depended on the interaction methodology. Our investigation, while still preliminary, suggests that video-enhanced surveys can be used for human-robot interaction design and evaluation, especially in the preliminary stages of defining users' existing mental models and expectations.
Little information is available regarding which types of failures robots experience in domestic settings. To further the community's knowledge, we manually classified 3062 customer reviews of robotic vacuum cleaners on Amazon.com. The resulting database was analyzed and used to create a Natural Language Processing (NLP) model capable of predicting whether a review contains a description of a failure or not. The current work describes the initial analysis and model development process as well as preliminary findings.
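The abstract above does not specify which NLP model was used; as a rough illustration of the task, here is a minimal Python/scikit-learn sketch of a failure-report classifier, assuming a simple TF-IDF plus logistic-regression baseline. The review texts and labels are made up for illustration only.

```python
# Minimal sketch of a failure-report classifier, assuming a TF-IDF +
# logistic-regression baseline (the paper does not specify its model);
# review texts and labels below are illustrative, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "The vacuum stopped charging after two weeks.",    # failure
    "Works great on hardwood floors, very quiet.",     # no failure
    "Error 5 keeps appearing and the brush jams.",     # failure
    "Schedules itself every morning without issues.",  # no failure
]
labels = [1, 0, 1, 0]  # 1 = review describes a failure

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reviews, labels)

print(model.predict(["The robot got stuck and the app crashed."]))  # likely [1]
```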
In an experimental lab study with a 2x2 between-subjects-design (N = 162), the aim was to examine how a negative expectancy violation caused by a social robot and its reward valence, which represents how desirable it is to interact with this robot, affect the evaluation of the robot and the interaction with it. The negative expectancy violation led to less positive evaluations of the interaction with the robot as well as its sociability and competence. The robot with a high reward valence evoked a more positive evaluation of the interaction with it as well as its sociability. Furthermore, when the robot had a low reward valence, an expectancy violation led participants to increasingly rate the robot's behavior as deviating from what they expected.
This study investigated human-robot cooperation in the context of prisoner's dilemma games and examined the extent to which people's willingness to cooperate with a robot varies according to the incentives provided by the game context. We manipulated the payoff matrices of human-robot prisoner's dilemma games and predicted that people would cooperate more often when cooperating with the robot was a relatively more rewarding option. Our results showed that, in the early rounds of the game, participants made significantly more cooperative decisions when the game structure provided more incentives for cooperation. However, their subsequent game decisions were predominantly driven by Cozmo's previous game choices, and the incentive structure was no longer a predictive factor. The findings suggest that people have a strong reciprocal tendency toward social robots in economic games and that this tendency might even surpass the influence of the reward value of their decisions.
This paper proposes a user-friendly teleoperation interface for manipulation. We provide the user with a view of the scene augmented with depth information from local cameras to provide visibility in occluded areas during manipulation tasks. This gives an improved sense of the 3D environment, which results in better task performance. Further, we monitor the pose of the robot's end effector in real-time so that we are able to superimpose a virtual representation into the scene when parts are occluded. The integration of these features enables the user to perform difficult manipulation tasks when the action environment is normally occluded in the main camera view. We performed preliminary studies with this new setup, and users provided positive feedback regarding the proposed augmented reality (AR) system.
This paper presents the results of preliminary experiments in human-robot collaboration for an agricultural task.
Within the framework of Applied Behavior Analysis (ABA), a study at the Children's Hospital of Eastern Ontario examined whether children on the autism spectrum are motivated by social robots as positive reinforcement during skill acquisition, compared to traditional toys, candy, escape, and affection. Using five robots and nine subjects, the study suggests that social robots are viable candidates as reinforcement in real-world ABA practice.
When judging humans, (formal) clothes play a vital role in the attribution of trust, competence, and sympathy. Most social robots, however, appear unclothed, and not much is known about whether and how clothes can influence how a robot is perceived. In an experiment, participants experienced either a formally dressed, an informally dressed, or an undressed robot and rated their experience on different questionnaires. Inconsistent with our expectations, the data revealed no influence of robot clothing on the experience of the robot. Possible reasons and implications for further studies are discussed.
Real-world studies allow for testing the limits of HRI systems and observing how people react to failures. We developed a fully autonomous personalised barista robot and deployed the robot on an international student campus for five days. We experienced several challenges, the most important one being speech recognition failures due to foreign accents. Nonetheless, these failures showed a different perspective on HRI, and we demonstrate how personalisation can overcome a negative user experience.
This work presents a method to identify children at risk for Autism Spectrum Disorder using behavioral data extracted from video analysis of child-robot interactions. Robots were used as a tool to elicit social engagement from the children in order to capture their social behaviors. A Convolutional Neural Network was used to classify the behavioral data as either at-risk ASD or Typical Development. The network performance was compared to two machine learning classifiers and the utility of the proposed method as a way to streamline existing diagnostic procedures was discussed.
To identify which home-robot services and functions should be developed with priority, we created three scenarios for robot services, covering cleaning, laundry, and cooking, with detailed functions for each service. We investigated the effect of service type on service evaluation. The robot's laundry service was evaluated most positively, compared with the cleaning and cooking services. Furthermore, we explored which robot attributes or task attributes affected robot service evaluation and purchase intention for each function. Different attributes of the robot and the task affected service evaluation and purchase intention depending on the service type.
This work presents a preliminary experiment aimed at comparing a traditional method of programming an industrial collaborative robot using a teach pendant with a novel method based on augmented reality and interaction at a high level of abstraction. In the experiment, three participants programmed a visual inspection task. Subjective and objective metrics are reported, as well as selected usability-related issues of both interfaces. The main purpose of the experiment was to gain initial insight into the challenges of comparing highly different user interfaces and to provide a basis for a more rigorous comparison that will be carried out in the future.
The Sound Settler is a utopian project which combines engineering and the arts. The utopian vision is to deploy an interactive music system on Mars before the first humans arrive. Two real robotic arms are controlled in a gravity-compensated master-slave mode, representing the human part on Earth and the robotic art piece envisioned to be located on Mars. The movements generate electronic music by charging physical capacitors, and a simulated third arm is displayed on screen, visualizing the movements on the red planet. This report presents the artistic design, the implemented infrastructure, and first subjective insights about human interactions with the art piece in public spaces. It motivates a more formal analysis and opens several research avenues for the future.
While social robots are designed to engage in socially interactive tasks, they may not always establish the intended social connection. We examined how people's experiences of succeeding in completing these interactive tasks influence attitudes toward social robots. People developed more positive attitudes toward social robots when they completed more tasks successfully. These findings highlight potential constraints of complex interactive tasks increasingly implemented in commercially available social robots. A trade-off may exist between early task success and the sustained training of complex social robots by their human social partners.
The demand for pet monitoring devices is growing due to the increasing number of one-person households raising pets. However, current monitoring methods using video cameras entail various problems, which may lead to discontinued usage. To overcome this problem, we propose Petbe, a social robot that represents the user's own pet using a context-aware approach based on BLE beacons and Raspberry Pis. The corresponding smartphone application provides various robot status updates (robot head) and movements (robot body). With the development of Petbe, we conducted an exploratory study to examine how it addresses the above issues in monitoring one's own pet, along the following factors: privacy concern, companionship, awareness, connectivity, and satisfaction. The outcomes indicate that Petbe helps to reduce privacy concerns and build companionship through empathetic interaction.
Grasping an object in a scene with multiple unknown objects requires both knowledge of which object is the target and planning of the grasping pose. However, teleoperating a robot hand to find a proper grasping pose in an unstructured environment is often too complicated and time-consuming. In this study, we propose an aided-grasping algorithm which autonomously corrects the pose of a robot hand using an eye-in-hand camera. We used multiple cameras for a natural vision-based teleoperation interface with the aided-grasping algorithm. Experiments with objects placed at arbitrary positions and angles have shown a successful implementation of the algorithm.
This paper presents an experimental design to explore how inexperienced persons adapt their usage of a robot in a construction task after their first experience with a cooperative robot. In the experiment, 12 participants had to decide which tasks of the building plan should be carried out by the robot and which tasks they would carry out themselves. Although they did not use a coherent strategy, the participants were able to adapt their decisions to the robot. As a result, they significantly reduced the total completion time within four repetitions.
Issues with learner engagement and interest in STEM and robotics fields may be addressed through an interdisciplinary education approach. We have developed a robot-theater framework using interactive robots as a way to integrate the arts into STEM and robotics learning. The present paper shows the breadth of our target populations, ranging from elementary school children to college students. Based on our experiences in conducting four different programs applied to different age groups, we discuss the characteristics, lessons, design considerations, and implications for future work.
In this paper we investigate if there is a relationship between incremental information presentation and persuasion, by testing if a robot can persuade a person to perform more tasks than the required ones. We designed a between-subject experiment where the robot Nao would ask the participant to perform a series of tasks. Our results show no significant difference between incremental and non-incremental conditions in Nao's ability to persuade participants, nor in likeability. However, Nao was able to get participants to stay longer than they intended to.
In this study, we analyzed the influence of an individual's anxiety toward robots on the shape of the uncanny valley. We conducted a series of questionnaire surveys via crowdsourcing to evaluate the mechano-humanness (MH) score, likability, and uncanniness of 80 robot face images. Thereafter, we divided the participants into two groups according to their scores on the Robot Anxiety Scale (RAS). T-tests on the approximate curves fitted to the MH, likability, and uncanniness scores showed that the shape of the uncanny valley is affected by users' RAS scores. Individuals who are less anxious toward robots exhibited higher likability/lower uncanniness toward the 80 robot faces, whereas those with high anxiety exhibited lower likability/higher uncanniness toward the same faces, and the uncanny valley of the latter is deeper than that of the former.
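As an illustration of the curve-comparison idea described above, the following minimal Python sketch fits group-wise cubic curves to likability as a function of the MH score and compares the depth of the resulting valleys. The cubic curve family, the synthetic data, and the group offsets are assumptions, not the study's actual analysis.

```python
# Minimal sketch of group-wise curve fitting for the uncanny-valley comparison,
# assuming cubic polynomials and synthetic likability data for the low- and
# high-anxiety groups (all values below are illustrative).
import numpy as np

rng = np.random.default_rng(0)
mh = np.linspace(0, 100, 80)                    # MH scores of 80 robot faces
dip = np.sin(2 * np.pi * mh / 100)              # toy shape: rise, then a dip at high MH

low_anx = 0.55 + 0.30 * dip + rng.normal(0, 0.05, mh.size)    # shallower valley
high_anx = 0.40 + 0.55 * dip + rng.normal(0, 0.05, mh.size)   # deeper valley

coef_low = np.polyfit(mh, low_anx, deg=3)
coef_high = np.polyfit(mh, high_anx, deg=3)

# Approximate each group's valley depth by the minimum of its fitted curve.
grid = np.linspace(0, 100, 500)
print("fitted minimum (low anxiety): ", np.polyval(coef_low, grid).min())
print("fitted minimum (high anxiety):", np.polyval(coef_high, grid).min())
```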
With the increasing level of system autonomy, the Human-Robot Ratio (HRR) can potentially be increased for more efficient operations. The HRR can be used to inform plausible team structures in real-world operations. However, the HRR can be inferred from various sources, such as human and mission performance. A small-scale study was conducted to understand how these measurements could be interpreted and used to inform the optimal HRR. The results show that HRRs derived from human performance and from mission performance might not be the same, suggesting that various mission-centric and human-centric factors must be considered before an optimal HRR can be derived.
In the future, assistive social robots will collaborate with humans in a variety of settings. Robots will not only follow human orders but will likely also instruct users during certain tasks. Such robots will inevitably encounter user uncertainty and hesitations. They will continuously need to repair mismatches in common ground in their interactions with humans. In this work, we argue that social robots should instruct humans following the principle of least-collaborative-effort. Like humans do when instructing each other, robots should not prioritise information efficiency over the benefits of collaborative interactive behaviour. In this paper, we first introduce the concept of least-collaborative-effort in human communication and then discuss implications for the design of instructions in human-robot collaboration.
Real-world learning interactions in classrooms often involve children interacting simultaneously with teachers and peer learners. FRACTOS is a triadic learning interaction system structured to guide the learner through different phases of self-regulation, such as planning, performance, and reflection, while engaging in a constructionist task of building fractions using virtual LEGO blocks with a virtual tutor and a robot peer.
We argue that the field of human-robot interaction needs a distributed and socially situated understanding of reminding and scheduling practices in the design of robots to meet the needs of people with cognitive disabilities. The results are based on an interaction analysis of video recorded workshop interactions during a co-creation process in which the participants tested a reminder-robot prototype that was designed for and with people with acquired brain injury.
It is not always easy for humans to understand the pain of others. Therefore, an effective method for expressing and explaining one's pain to others is valuable. In this study, we attempt to create a robot that is capable of expressing human pain through its deformations. The user of the robot can edit those deformations manually by hand, and the robot can record the deformation processes for replay. In this late-breaking report, we describe the basic requirements of this robot and its early prototype.
Using speech as an effective communication modality in human-robot interaction (HRI) allows designers to implement conceptual metaphors as a linguistic device. Metaphors appear to instantiate psychological framing effects and can influence the user's reasoning significantly. The present paper presents two consecutive studies that explore how metaphors can be used in speech output for social robots to deliberately influence the user's perception of their relationship with the robot. As a first step, metaphoric expressions for framing cooperation or delegation in HRI settings were identified in a workshop. Effects of those metaphors on people's strategies for organizing tasks in human-robot collaboration were then evaluated in an online survey. We found that metaphors suitable for encouraging users to do a task together with a robot are metaphors like "partners on a journey" or "allies". In contrast, metaphors that can be used to encourage handing over the task to a robot are metaphors like "boss" or "ruler". Further research needs to test whether these metaphors also have an effect on users' actions in HRI settings.
Research focused upon Child-Robot Interaction shows that robots in the classroom can support diverse learning goals amongst pre-school children. However, studies with children and robots in the Global South are currently limited. To address this gap, we conducted a study with children aged 4-8 years at a community school in New Delhi, India, to understand their interaction and experiences with a social robot. The children were asked to teach the English alphabet to a Cozmo robot using flash cards. Preliminary findings suggest that the children orient to the robot in a variety of ways including as a toy or pet. These orientations need to be explored further within the context of the Global South.
This study aims to test the viability of using social robots for eliciting rich disclosures from humans to identify their needs and emotional states. Self-disclosure has been studied in the psychological literature in many ways, addressing both people's subjective perceptions of their disclosures and objective disclosure evaluated via direct observation and analysis of verbal and written output. Here we are interested in how people disclose (non-sensitive) personal information to robots, aiming to further understand the differences between one's subjective perception of disclosure and evidence of disclosure from the shared content. An experimental design is suggested for evaluating disclosure to social robots compared to humans and conversational agents. Initial results suggest that while people perceive that they disclose more to humans than to humanoid social robots or conversational agents, no actual differences in the content of the disclosure are observed between the three agents.
Joint attention has been identified as a critical component of successful human-machine teams. Teaching robots to develop awareness of human cues is an important first step towards attaining and maintaining joint attention. We present a joint attention estimator that creates many possible candidates for joint attention and chooses the most likely object based on a human teammate's hand cues. Our system works within natural human interaction time (< 3 seconds) and achieves above 80% accuracy. Our joint attention estimator provides a meaningful step towards enabling robots to leverage human social cues for successful human-machine teaming.
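The abstract does not detail how candidate objects are scored; one plausible minimal sketch, assuming the estimator ranks candidates by how well they align with the hand's pointing direction, is shown below. The object names, coordinates, and the cosine-similarity scoring are illustrative assumptions, not the authors' method.

```python
# Minimal sketch of choosing a joint-attention target from hand cues, assuming
# candidates are scored by the angle between the hand's pointing vector and the
# vector from the hand to each object (positions below are illustrative).
import numpy as np

def pointing_score(hand_pos, hand_dir, obj_pos):
    """Cosine similarity between the pointing direction and the hand-to-object vector."""
    to_obj = np.asarray(obj_pos, dtype=float) - np.asarray(hand_pos, dtype=float)
    to_obj /= np.linalg.norm(to_obj)
    direction = np.asarray(hand_dir, dtype=float)
    direction /= np.linalg.norm(direction)
    return float(np.dot(direction, to_obj))

hand_pos = [0.0, 0.0, 0.8]
hand_dir = [1.0, 0.1, -0.2]
candidates = {"mug": [0.9, 0.1, 0.6], "wrench": [0.2, 0.8, 0.7], "bolt": [0.8, -0.5, 0.7]}

best = max(candidates, key=lambda name: pointing_score(hand_pos, hand_dir, candidates[name]))
print("most likely joint-attention target:", best)
```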
We investigated how voice and motion from a small humanoid robot affect an autistic child's re-engagement of attention. Results suggest that a robot can use motion to re-engage the attention of an autistic child and that two adjoining multimodal cues are more effective than a single unimodal cue.
We quantitatively analyze and compare how teachers in Serbia and the UK use physical contact to guide autistic children through an activity with and without a robot. We annotated 40 videos from the DE-ENIGMA dataset of autistic children interacting with or without a robot in the presence of an adult teacher in Serbia or the UK. Results show touch was widely used in both countries and more when the robot was not present. Culture affected where touch occurred, while the robot affected touch style.
Toward empathetic and harmonious human-robot interaction (HRI), automatic estimation of human emotion has attracted increasing attention from multidisciplinary research fields. In this report, we propose an attention-based multimodal fusion approach that explores the space between traditional early and late fusion approaches, to deal with the problem of asynchronous multimodal inputs while considering their relatedness. The proposed approach enables the robot to align the human's visual and speech signals (more specifically, facial, acoustic, and lexical information) extracted by its cameras, microphones, and processing modules and is expected to achieve robust estimation performance in real-world HRI.
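As a rough sketch of the attention-based fusion idea described above, the following PyTorch snippet projects each modality's features into a shared space and combines them with learned attention weights. The layer sizes, feature dimensions, and scoring function are assumptions rather than the authors' architecture.

```python
# Minimal sketch of attention-based fusion over multimodal features, assuming
# each modality is projected to a shared dimension and weighted by learned
# attention scores (dimensions below are illustrative, not the authors' model).
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, dims, d_model=64):
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(d, d_model) for d in dims])
        self.score = nn.Linear(d_model, 1)

    def forward(self, feats):
        # feats: list of (batch, dim_i) tensors, one per modality
        h = torch.stack([p(f) for p, f in zip(self.proj, feats)], dim=1)  # (B, M, d)
        w = torch.softmax(self.score(torch.tanh(h)), dim=1)               # (B, M, 1)
        return (w * h).sum(dim=1)                                         # (B, d)

face, audio, text = torch.randn(2, 128), torch.randn(2, 40), torch.randn(2, 300)
fusion = AttentionFusion(dims=[128, 40, 300])
print(fusion([face, audio, text]).shape)  # torch.Size([2, 64])
```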
In human-robot collaboration settings, each agent often has access to private information (PI) that is unavailable to others. Examples include task preferences, objectives, and beliefs. Here, we focus on the human-robot dyadic scenarios where the human has private information, but is unable to directly convey it to the robot. We present Q-Network with Private Information and Cooperation (Q-PICo), a method for training robots that can interactively assist humans with PI. In contrast to existing approaches, we explicitly model PI prediction, leading to a more interpretable network architecture. We also contribute Juiced, an environment inspired by the popular video game Overcooked, to test Q-PICo and other related methods for human-robot collaboration. Our initial experiments in Juiced show that the agents trained with Q-PICo can accurately predict PI and exhibit collaborative behavior.
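A minimal sketch of the core idea of explicit PI prediction, assuming a Q-network with an auxiliary private-information head trained jointly with the action values, could look as follows in PyTorch; the architecture, dimensions, and loss weighting are illustrative assumptions, not the Q-PICo implementation.

```python
# Minimal sketch of a Q-network with an auxiliary private-information (PI)
# prediction head, in the spirit of explicit PI modelling; all sizes and the
# training targets below are placeholders.
import torch
import torch.nn as nn

class QPICoNet(nn.Module):
    def __init__(self, obs_dim, n_actions, n_pi_classes, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.q_head = nn.Linear(hidden, n_actions)      # action values
        self.pi_head = nn.Linear(hidden, n_pi_classes)  # predicted human PI

    def forward(self, obs):
        h = self.encoder(obs)
        return self.q_head(h), self.pi_head(h)

net = QPICoNet(obs_dim=32, n_actions=6, n_pi_classes=4)
obs = torch.randn(8, 32)
q_values, pi_logits = net(obs)

# Joint loss: a temporal-difference target for Q plus cross-entropy on PI labels.
td_target = torch.randn(8, 6)
pi_labels = torch.randint(0, 4, (8,))
loss = nn.functional.mse_loss(q_values, td_target) + nn.functional.cross_entropy(pi_logits, pi_labels)
loss.backward()
print(float(loss))
```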
Robots that tell children stories are becoming common. Given that the practice of parent-child storytelling is part of family culture, it is critical to investigate parental acceptance of storytelling robots. Drawing on technology acceptance models, the theory of planned behavior, and Bowen family systems theory, we conducted a mixed-methods study involving an online survey of 115 respondents and 18 in-person interviews. We aimed to propose a model of parental acceptance of storytelling robots contextualized in potential use case scenarios. Preliminary findings indicate an overall positive attitude towards children's storytelling robots and identify factors that can affect parental acceptance of these robots. This study may inform the design of storytelling robots tailored to the needs of parents and their children in the home.
Telecommunication software is often used to tackle social isolation in those with restricted mobility, but lacks high-level interactions found in face-to-face conversation. This study investigated the use of an immersive control system for a robotic surrogate when compared to Skype. There was no significant difference between the presence and social presence felt between the two systems; however, Skype was found to be significantly easier to use. Future work will focus on identifying user requirements and further developing the control system.
Socially assistive robots might improve individuals' quality of life by carrying out therapeutic interventions. However, when users try to cheat by disregarding the robots' recommendations, the robots might not be able to perform their supporting functions. In the present study, we aimed to evaluate how the robot's behavioral style affects users' compliance and cheating behavior. Sixty volunteers underwent neuro-psychological testing administered by Pepper configured as neutral, friendly, or authoritarian. The results revealed that the robot with a neutral behavioral style seems to reduce individuals' compliance.
Congruence between the visual appearance of a robot and its behavioral and communicative characteristics has been shown to be a crucial determinant of user acceptance. Given the growing popularity of speech interfaces, a coherent design of a robot's looks and its voice is becoming more important. Which robot voice fits which appearance, however, has hardly been investigated to date. This is where the present research comes in. A randomized lab experiment was conducted, in which 165 participants listened to one of five more or less humanlike robot voices and subsequently drew a sketch corresponding to their imagination of the robot. The sketches were analyzed regarding the presence of various body features. While some features appeared in almost all drawings regardless of the condition (e.g., head, eyes), other features were significantly more prevalent in voice conditions characterized by low human-likeness (wheels) or high human-likeness (e.g., nose). Our results give first hints on which embodiment users might expect from different robotic voices.
Non-verbal communication that encompasses emotional body language is a crucial aspect of social robotics applications. Deep learning models for the generation of robotic expressions of bodily affect gain more and more ground recently over the hand-coded methods. In this work, we present a Conditional Variational Autoencoder network that generates emotional body language animations of targeted valence and arousal for a Pepper robot, and we conduct a user study to evaluate the interpretability of the generated animations.
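As an illustration of the conditional generation described above, the following PyTorch sketch shows a CVAE whose decoder produces a joint-angle animation conditioned on a (valence, arousal) pair. The sequence length, joint count, and layer sizes are assumptions, not the network used in the study.

```python
# Minimal sketch of a Conditional VAE that decodes a joint-angle animation
# conditioned on (valence, arousal); architecture details are illustrative.
import torch
import torch.nn as nn

class BodyLanguageCVAE(nn.Module):
    def __init__(self, seq_len=30, n_joints=17, cond_dim=2, latent=16, hidden=128):
        super().__init__()
        self.seq_len, self.n_joints = seq_len, n_joints
        x_dim = seq_len * n_joints
        self.enc = nn.Sequential(nn.Linear(x_dim + cond_dim, hidden), nn.ReLU())
        self.mu, self.logvar = nn.Linear(hidden, latent), nn.Linear(hidden, latent)
        self.dec = nn.Sequential(nn.Linear(latent + cond_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, x_dim))

    def decode(self, z, cond):
        out = self.dec(torch.cat([z, cond], dim=-1))
        return out.view(-1, self.seq_len, self.n_joints)

    def forward(self, x, cond):
        h = self.enc(torch.cat([x.flatten(1), cond], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        return self.decode(z, cond), mu, logvar

# Sampling a new animation for a target valence/arousal pair:
model = BodyLanguageCVAE()
cond = torch.tensor([[0.8, -0.3]])             # high valence, low arousal
anim = model.decode(torch.randn(1, 16), cond)  # (1, 30 frames, 17 joints)
print(anim.shape)
```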
Here, we present a novel experiment on the as-yet-unstudied phenomenon of ASC adults' responses to interruption, using a robot role-play clerical task. Using an IQ-, gender-, and task-parameter-matched NT control sample, we found that adults with ASC experience marginally less task disruption from a robot interrupter compared to a human. We surmise that robot-assisted therapy for adults with the condition is a potential research avenue worth further exploration.
Declining cognitive abilities can have a tremendous impact on one's ability to age healthily and maintain a high quality of life. Cognitive training has been shown to improve neural plasticity and increase cognitive reserve, reducing the risk of dementia. Specifically, learning to play the piano has been shown to be an engaging, multimodal form of cognitive training. Socially assistive robots (SAR) present a unique opportunity to increase access to user-tailored piano-learning cognitive training. In this report we present a four-week feasibility intervention of robot-led piano lessons for older adults with mild cognitive impairment. Engaging with the SAR improved cognitive function across multiple domains, and participants found the SAR a highly competent instructor.
In this paper, we present a social robotics framework to assist workers with intellectual and developmental disabilities (IDD). AIDA, which stands for artificially intelligent disability assistant, will help workers with IDD through social scaffolding techniques. For the experiments, we simulated disabilities with our participants and evaluated the impact of social scaffolding with the Pepper humanoid robot. The results show that stronger forms of scaffolding are required to provide more effective assistance to workers with IDD.
Age-related cognitive impairment is becoming a more prevalent concern as the elderly population continues to increase. Technological systems created for cognitive rehabilitation need to be motivating to combat the personal and logistic factors that make it difficult for them to remain engaged [4]. In this paper, we present a pilot study with a socially assistive robot-facilitated memory game that employs sensory feedback (audio, haptic, and both) to explore the design considerations with adults. The ultimate aim is to inform the design of a cognitive rehabilitation system for individuals with age-related cognitive decline. The preliminary results suggest a preference for auditory feedback, and participants believed they performed best in this condition. Based on qualitative feedback, we have identified improvements that can be made to the system to enhance engagement.
The Darwin OP2 robot was used to play a cooperative game of concentration with a human. Guided calibration was used to teach the robot colors in different environments. Vision algorithms and servo movements were used to identify the elements of the game. The effect of additional body language, which was not necessary for the game, was studied in reference to the way the robot was perceived by human partners.
In this report, we propose a framework for generating long-term human-like motion based on a deep generative model. Thanks to the network structure, the proposed method allows us to generate seamless long-term motions even though the model is trained on short, 4-second motion samples. Computer-graphics renderings of the generated motions appear to reproduce scenes in which a pair of persons are talking to each other.
A drone agent can benefit from exhibiting social cues, as introducing behavioral cues in robotic agents can enhance interaction trust and comfort with users. In this work, we introduce the development and setup of a responsive eye prototype (DroEye) mounted on a drone to demonstrate prominent social cues in Human-Drone Interaction. We describe possible attributes associated with the DroEye prototype and our future research directions to enhance the overall experience with social drones in our environment.
In this study, we investigated robot behaviors in a shopping mall that can make passersby stop in front of the robot. This is a first step toward developing a social robot for advertising. Three types of robot behavior (Greeting, Troubling, and Dancing) were implemented on the robot. Results from more than 65,000 passersby show that the Troubling motion makes passersby stop more often and stay longer.
Robots will soon deliver food and beverages in various environments. These robots will need to communicate their intention efficiently; for example, they should indicate who they are addressing. We conducted a real-world study of a water serving robot at a university cafeteria. The robot was operated in a Wizard-of-Oz manner. It approached and offered water to students having their lunch. Our analyses of the relationship between robot gaze direction and the likelihood that someone takes a drink show that if people do not already have a drink and the interaction is not dominated by an overly enthusiastic user, the robot's gaze behavior is effective in selecting an interaction partner even "in the wild".
This study proposes a novel imitation learning approach for the stochastic generation of human-like rhythmic wave gestures and their modulation for effective non-verbal communication through a probabilistic formulation using joint angle data from human demonstrations. This is achieved by learning and modulating the overall expression characteristics of the gesture (e.g., arm posture, waving frequency and amplitude) in the frequency domain. The method was evaluated on simulated robot experiments involving a robot with a manipulator of 6 degrees of freedom. The results show that the method provides efficient encoding and modulation of rhythmic movements and ensures variability in their execution.
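A minimal sketch of the frequency-domain encoding and modulation step, assuming a single synthetic demonstration and omitting the probabilistic multi-demonstration formulation described above, could look like this in Python/NumPy; the sampling rate, joint signal, and modulation factors are illustrative.

```python
# Minimal sketch of encoding a waving joint trajectory in the frequency domain
# and regenerating it with a modulated frequency and amplitude; the probabilistic
# part of the method is omitted and the demonstration below is synthetic.
import numpy as np

fs = 50.0                                        # sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
demo = 0.6 * np.sin(2 * np.pi * 1.5 * t) + 0.2   # demonstrated elbow angle (rad)

spec = np.fft.rfft(demo - demo.mean())
freqs = np.fft.rfftfreq(demo.size, d=1 / fs)
k = np.argmax(np.abs(spec))                      # dominant rhythmic component
amp = 2 * np.abs(spec[k]) / demo.size
freq, phase = freqs[k], np.angle(spec[k])

# Modulate: wave 20% faster and with 50% larger amplitude.
new = demo.mean() + 1.5 * amp * np.cos(2 * np.pi * 1.2 * freq * t + phase)
print(f"learned frequency {freq:.2f} Hz, amplitude {amp:.2f} rad")
```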
For social robots to be deployed in an interaction-centred environment such as a hospital, entities need to understand how to design human-robot interactions for the technology to be adopted successfully. For instance, if the robot were placed at a hospital concierge, it would need to engage visitors similarly to a human concierge, through verbal or non-verbal gestures (social cues). In this study, we investigate two hypotheses: first, that various attention-drawing social cues significantly correlate with the receptivity of the robot; second, that humans' preferred medium of information transfer is verbal interaction. We set up a humanoid concierge robot, Cruzr, in a hospital for a 5-day trial. Our findings indicate an increase in receptivity when Cruzr performed a social cue as compared to neutral mode, and that the preferred mode of communication was touch rather than voice.
Detecting lies in a real-world scenario is an important skill for a humanoid robot that aims to act as a teacher, a therapist, or a caregiver. In these contexts, it is essential to detect lies while preserving the pleasantness of the social interaction and the informality of the relation. This study investigates whether pupil dilation related to an increase in cognitive load can be used to swiftly identify a lie in an entertaining scenario. The iCub humanoid robot plays the role of a magician in a card game, telling which card the human partner is lying about. The results show greater pupil dilation in the presence of a false statement, even in front of a robot and without the need for a strictly controlled scenario. We developed a heuristic method (accuracy of 71.4% against 16.6% chance level) and a random forest classifier (precision and recall of 83.3%) to detect the false statement. Additionally, the current work suggests a potential method to assess the lying strategy of the partner.
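The following minimal scikit-learn sketch illustrates the random-forest part of such an approach, assuming simple baseline-corrected pupil-dilation features; the feature definitions and synthetic values are assumptions rather than the study's actual data or pipeline.

```python
# Minimal sketch of a lie-detection classifier, assuming a random forest over
# simple pupil-dilation features (baseline-corrected mean and peak dilation);
# the feature values below are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 60
# Columns: mean pupil dilation, peak dilation (mm, baseline-corrected).
truth = rng.normal([0.05, 0.10], 0.05, size=(n // 2, 2))
lies = rng.normal([0.20, 0.35], 0.05, size=(n // 2, 2))   # larger dilation when lying
X = np.vstack([truth, lies])
y = np.array([0] * (n // 2) + [1] * (n // 2))             # 1 = false statement

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```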
Entertainment applications for robots are often based on voice interaction. In these scenarios, the robot is primarily used as a voice assistant with a physical body. We conducted a study to investigate whether the entertainment value of a robot during a game can be attributed to the experience of voice interaction alone, or whether it is related to the physical presence of a robot and its expressivity through gestures and movement. The study examined the user experience for three different set-ups of a quiz game (with voice assistant, robot without animation, and animated robot). The results indicate that the perceived hedonic quality increases with physical presence and movement of the robot. Pragmatic quality was not rated differently for the three game versions. Additional qualitative interviews suggest that it might be desirable to design not overly expressive, but meaningful robot behavior for entertainment applications, in order to promote both hedonic and pragmatic experience.
Human-Robot Interaction often involves a robot assisting or providing feedback on a human partner's performance, or cooperating to complete a task. In such an interaction scenario, the robotic system needs to perceive the human teammate's cognitive state that might affect task performance. In this pilot study, the focus is on developing a framework that assesses the human's cognitive performance for a human-robot synergetic task, such as an assembly task. Specifically, we explore the correlation between a person's quality of sleep and a performance metric obtained through a standard task for cognitive assessment, the N-back task. To validate our hypothesis, we conducted a study with 25 participants, and our results indicate that there is a moderate correlation between some stages of the sleep cycle and performance. Additionally, we present a possible Human-Robot Interaction setup that could benefit from our results.
Social robots are increasingly being used as a mediator between therapists and children with Autism Spectrum Disorder (ASD) and Attention Deficit Hyperactivity Disorder (ADHD). This paper describes ongoing work that targets children with diverse forms of ASD who undergo long-term interventions with the robot. Additionally, this paper describes a novel behavior that was implemented to introduce and practice a set of social behaviors used for greetings and non-verbal communication. We conducted a long-term study with a cohort of 15 children aged from 3 to 12 years old, and this paper presents the preliminary results.
Manual guidance of a collaborative robot arm (i.e., moving the robot arm by hand) is a technique to program the robot without coding and in the context of the job. However, it requires the robot to be available and not in operation. As a workaround, we propose performing manual guidance on a hologram of that same robot, the feasibility of which we are investigating. A potential limitation of this approach is the lack of tangibility of the hologram, for which we are investigating the contribution that mid-air haptics (MAH) can make. Early results suggest beneficial effects from tactile feedback on AR pick-and-place.
We explore the viability of developing a robot to assist cognitive stimulation activities for early-stage dementia users within a day-care centre. Working with healthcare professionals, we have designed interactive sessions where the robot Pepper plays stimulation games according to three levels of cognitive complexity. The initial design has been tested with an end-user to evaluate the system as well as to gain insight on the necessary interaction cues that the robot should perform to actively engage potential users.
In three online studies we examine which social context factors determine whether a robot should (not) deliver a message/reminder to its user, using three scenarios: a smart kitchen (n = 101), a smart living room (n = 96) and a smart office (n = 96). In addition, we varied the nature of the message (urgent/non-urgent; sensitive/non-sensitive). In this late-breaking report, we present some preliminary insights on the nature of the situations and message types for which participants would prefer the message to be delivered.
Current physical rehabilitation techniques can be boring and frustrating for those that need them, especially when they are carried out alone over the long-term. Individual, repetitive exercises are also carried out by high performance athletes in sports such as squash. By observing the motivational behaviours used by professional squash coaches, we have analysed coaching styles which will help to inform the design of an autonomous robotic coach capable of increasing adherence to a long-term sports or rehabilitation exercise program.
In this paper, we address the question of how specific barman-robot interaction styles can affect users' engagement. To this end, we implemented a barman robot called "BRILLO" with neutral, entertaining, and empathetic behavioral styles, suggesting drinks and taking orders from customers. Results show that a robot's interaction style may determine users' level of engagement. Indeed, interacting with an empathetic robot that modulates its behavior according to the user's is more effective than a neutral robot in improving engagement and positive emotions in public-service contexts. Moreover, users experienced more positive emotions when they perceived BRILLO as safe and as more similar to a human.
People develop mental models of robots to improve their interactions with them, but predictions from these models are not always accurate. Robots often fail to communicate their capabilities, especially perceptual capabilities---i.e., what they can sense and understand about the world. This paper describes ongoing preliminary work [8] towards enabling robots to autonomously estimate and influence human beliefs about robot perceptual capabilities. A custom-designed, web-based game is being used to establish feasibility and guide ongoing work. Our approach is discussed along with results from a pilot study.
Telepresence robots could help homebound students to be physically embodied and socially connected in the classroom. However, most telepresence robots do not provide their operators with information about whether their speaking volume is appropriate in the remote context. We are investigating how operators could benefit from live feedback about speaking volume appropriateness as part of our ongoing research on using remote presence robots to improve education and social connectedness for students experiencing extended absences from school. This preliminary report describes (1) the development of a model of speaking volume appropriateness to provide this feedback, (2) implementation of a feedback element in the operator's user interface, and (3) plans for long-term deployment to assess impacts on the social and educational experience of homebound high school students.
Robot teleoperation is crucial for many hazardous situations such as handling radioactive materials, undersea exploration and firefighting. Visual feedback is essential to increase the operator's situation awareness and thus accurately teleoperate a robot. In addition, the control interface is equally important as the visual feedback for effective teleoperation. In this paper, we propose a simple and cost-effective orthographic visual interface for the teleoperation system by visualizing the remote environment to provide depth information using only a single inexpensive webcam. Further, we provide a simple modification to the control interface (Leap Motion) to achieve a wider workspace and make it more convenient for the user. To realize the merits of the proposed system, a comparison between the modified Leap Motion interface and traditional control modalities (i.e., joystick and keyboard) is conducted using both the proposed orthographic vision system and a traditional binocular vision system. We conduct a user study (N = 10) to evaluate the effectiveness of this approach to teleoperate a 6-DoF arm robot to carry out a pick and place task. The results show that the combination of Leap Motion with the orthographic visual system outperforms all other combinations.
This article presents progress in building a new dataset of 'unexpected daily situations' (like someone tripping on a box while carrying a tray to the kitchen, or someone burning themselves with hot water and dropping a mug). Each of the situations involves one or two humans in a familiar, structured environment (e.g., a kitchen, a living room) with rich semantics. Correctly interpreting the situation (including recognising an error, undesired effect or incongruity when it occurs, as well as selecting the best repair action) requires beyond-state-of-the-art spatio-temporal, semantic and socio-cognitive modelling. As such, the aim of the dataset is to (i) offer a realistic source of data to train and test such novel algorithms and (ii) provide a new benchmark against which algorithms can be demonstrated.
Robots are increasingly becoming part of our lives. How we perceive and predict their behavior has been an important issue in HRI. To address this issue, we adapted a well-established prediction paradigm from cognitive science for HRI. Participants listened to a greeting phrase that sounded either human-like or robotic. They indicated whether the voice belonged to a human or a robot as fast as possible with a key press. Each voice was preceded by a human or robot image (a human-like robot or a mechanical robot) to cue the participant about the upcoming voice. The image was either congruent or incongruent with the sound stimulus. Our findings show that people reacted faster to robotic sounds in congruent trials than in incongruent trials, suggesting a role of predictive processes in robot perception. In sum, our study provides insights into how robots should be designed and suggests that designing robots that do not violate our expectations may result in more efficient interaction between humans and robots.
In this paper, we propose a model to control an agent's gaze behavior, called the Interactive Kinetics-Based Gaze Direction Model (iK-Gaze). iK-Gaze is used in one-to-one interaction between an agent and a human, where the agent's gaze direction is calculated using an energy function that takes the human's gaze direction as input. iK-Gaze aims to generate the agent's gaze through interaction with the human rather than using predefined motions and timings. Contrary to rule-based or statistics-based models, the agent's gaze motion changes dynamically according to the human's gaze motion. Moreover, by adjusting three parameters (the desire to look, mutual gaze hesitation, and mutual gaze stress), the tendency of the agent's gaze behavior can be changed easily. A case study confirmed that the agent's gaze behavior changes according to the human's gaze behavior.
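The abstract does not give the form of the energy function; the following Python sketch illustrates one plausible energy-based gaze update in the spirit of iK-Gaze, with an attraction term (desire to look) and an avoidance term that grows during sustained mutual gaze (hesitation and stress). The specific terms and parameter values are assumptions, not the authors' model.

```python
# Minimal sketch of an energy-based gaze update: the agent's gaze angle is pulled
# toward the human's face and pushed away while mutual gaze persists. All terms
# and constants are illustrative assumptions.
import numpy as np

def gaze_energy(agent_gaze, human_gaze, mutual_time,
                desire=1.0, hesitation=0.5, stress=0.1):
    attract = desire * agent_gaze ** 2                              # 0 rad = toward the human's face
    mutual = np.exp(-agent_gaze ** 2) * np.exp(-human_gaze ** 2)    # ~1 when both look at each other
    avoid = (hesitation + stress * mutual_time) * mutual
    return attract + avoid

agent, human, mutual_time, lr = 0.8, 0.0, 0.0, 0.1
for step in range(50):                                              # simple gradient descent on the energy
    eps = 1e-4
    grad = (gaze_energy(agent + eps, human, mutual_time) -
            gaze_energy(agent - eps, human, mutual_time)) / (2 * eps)
    agent -= lr * grad
    mutual_time = mutual_time + 0.02 if abs(agent) < 0.2 and abs(human) < 0.2 else 0.0

print(f"agent gaze offset after 50 update steps: {agent:.2f} rad")
```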
Social robots may be beneficial to educators working with autistic children in helping to monitor the children's progress. To identify needs for measuring and tracking the progress of autistic children, we conducted interviews with nine experienced educators in Serbia and the Netherlands who work with autistic children. Responses revealed educators' needs to identify antecedents of notable child behaviour, to have standardised measures of social skills, and to understand child behaviour across settings. We present initial design concepts for how social robots could be utilised to meet these needs.
People are starting to interact with robots in a range of everyday contexts including hospitals, shopping centers, and airports. When faced with a robot, people with little or no prior experience necessarily build expectations based on the robot's superficial appearance and actions, mediated by any tangentially related experience (e.g., media depictions). However, the person's constructed expectations (e.g., that a humanoid robot can hold a conversation) do not necessarily relate to the robot's actual capability, resulting in an expectation discrepancy. This can create disappointment, when the person notices the limited capability, or misplaced trust, if the person believes a robot is more capable than it is. In this paper we present an initial framework for describing and discussing expectation discrepancy.
Social robots have been shown to help in language education for children. This can be a valuable aid for immigrant children who need additional help to learn a second language, which their parents do not understand, in order to attend school. We present the setup for a long-term study, being carried out in blinded, to aid immigrant children with poor skills in the Norwegian language in improving their vocabulary. This includes additional tools to help parents follow along and provide additional help at home.
We present a neural network based system capable of learning a multimodal representation of images and words. This representation allows for bidirectional grounding of the meaning of words and the visual attributes that they represent, such as colour, size and object name. We also present a new dataset captured specifically for this task.
In this paper, we report on an android platform with a masculine appearance. In human-human interaction research, several studies have reported effects of gender in the context of social touch. In human-robot interaction research, however, gender effects have mainly been studied with respect to human gender; a robot's perceived gender has received less attention. The purpose of developing the android is to investigate gender effects on social touch in human-robot interaction, in comparison to existing android platforms with feminine appearances. For this purpose, we prepared a face design that does not correspond to any existing person, in order to avoid appearance effects, as well as fabric-based capacitive upper-body touch sensors.
Communication among older adults in a care home is often reduced due to cognitive, communication, and mobility impairments. They tend to become isolated, which may lead to faster cognitive decline. We present an approach in which a robot is used as a communication vehicle between people located in adjacent rooms. To program the robot, we resorted to physical blocks with 3D icons. Older adults are able to create a sequence of messages that the robot delivers to another person or group. The blocks represented user-recorded voice messages, pre-recorded messages (e.g., proverbs), or actions (e.g., delivering cookies). A preliminary study with 22 older adults in a care home showed positive engagement between groups and an overall sense of excitement and fun. Carrier robots promise to extend the range of action and operate as a communication tool, enabling interactions between people who may otherwise not be able to interact whenever they feel compelled to.
We present an early discussion on how people might interact with a rogue autonomous vehicle (AV). A rogue autonomous vehicle is one that may behave in unpredictable and dangerous ways because of malfunctioning sensors, vehicle tampering, or malicious hacking. Rogue AVs present a danger both to passengers' and nearby pedestrians' safety. To address this challenge we conducted a preliminary design study and gathered and analyzed design ideas that highlight how people envision they could interact with rogue AVs by both identifying and reacting to them. Our initial results highlight design concepts such as redundant sensors, tiered responses, audio and visual cues, and ways to obtain trusted confirmation. We conclude with a discussion on future steps for designing for interactions with rogue AVs.
Handshakes are fundamental and common greeting and parting gestures among humans. They are important in shaping first impressions as people tend to associate character traits with a person's handshake. To widen the social acceptability of robots and make a lasting first impression, a good handshaking ability is an important skill for social robots. Therefore, to test the human-likeness of a robot handshake, we propose an initial Turing-like test, primarily for the hardware interface to future AI agents. We evaluate the test on an android robot's hand to determine if it can pass for a human hand. This is an important aspect of Turing tests for motor intelligence where humans have to interact with a physical device rather than a virtual one. We also propose some modifications to the definition of a Turing test for such scenarios taking into account that a human needs to interact with a physical medium.
A key area in human-robot interaction research is the use of a robot to collect participant research data. However, traditional website-based data collection methods do not integrate with the robot providing the interaction. This leaves a clear disconnect between the static delivery of a digital questionnaire and a lack of context-relevant behaviours from the robot. In this paper, we present an HTML/JavaScript software system that creates a direct link between digital data collection and the robot that delivers it. In doing so, this system can be used to create more dynamic data collection sessions, using the robot's speech, movement or presence to support the delivery of questionnaires. This system can also be used to create interactive sessions to support experimentation. We present two proposed use-case scenarios built with the system on the Pepper humanoid robot in mental health and well-being services. We present this software system to help speed up development time for user studies, lower the entry barrier for non-technical researchers who want to use social robots for data collection, and create more systematic data collection methods for robots. Future work on the software includes increasing the repertoire of questionnaire items available to allow for more sophisticated data collection.
This study explores the effect of a social robot's use of ice-breaking humor on likeability and future contact intentions. Results from a laboratory experiment showed that jokes used at greetings and topic transitions were effective in enhancing likeability and reducing awkwardness, while the intention to use the robot for a longer period was not affected by the use of humor. We suggest including social jokes when designing the first encounter with a robot, as long as the jokes do not interfere with task performance. This work may be helpful in deciding whether to insert humor and where to place it in a robot's speech.
In a future where many robot assistants support human endeavors, interactions with multiple robots either simultaneously or sequentially will occur. This paper highlights an initial exploration into one type of sequential interaction, which we call "transfers" between multiple service robots. We defined the act of transferring between service robots and further decomposed it into five stages. Our research was informed by a design workshop investigating usage of multiple service robots. We also identified open design and research questions on this topic.
Competent collaboration between robots and people in the open world requires sensing and reasoning about transitions of people's attention to the robots themselves, as well as to other people and objects in the environment. We present challenges and opportunities with designing extended attentional capabilities for interactive systems, including the need to track, reason about, and manage the attentional foci of all actors. We describe work in progress to leverage such attentional capabilities for interaction management with a prototype situated robotic system.
This paper describes work in progress that aims to design automatic engagement recognition for Robot-Mediated Therapy (RMT) tailored to children with diverse forms of Autism Spectrum Disorder (ASD) in co-occurrence with Attention Deficit Hyperactivity Disorder (ADHD) or Delayed Speech Development (DSD). To this end, we utilized videos obtained from RMT sessions of 36 children aged 4-12 years old. The videos were pre-processed for use with machine learning approaches, and keypoints for facial features and body joints were extracted using the OpenPose tool. Based on these data, an automatic engagement recognition model will be designed.
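For context, OpenPose writes one JSON file per video frame containing a "people" array with flattened (x, y, confidence) keypoints. The sketch below is a minimal illustration rather than the authors' pipeline: it shows how such per-frame keypoints could become feature vectors for an engagement classifier. The directory layout, the placeholder labels, and the choice of a random forest are assumptions made for the example.

```python
import json
import glob
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def frame_features(json_path):
    """Flatten OpenPose body keypoints (x, y, confidence) for the first detected person."""
    with open(json_path) as f:
        frame = json.load(f)
    if not frame["people"]:
        return None
    return np.array(frame["people"][0]["pose_keypoints_2d"])

# Hypothetical directory of per-frame OpenPose output for one therapy session.
features = [frame_features(p) for p in sorted(glob.glob("session_01/*_keypoints.json"))]
X = np.array([f for f in features if f is not None])

# Placeholder per-frame engagement labels (in practice these would come from manual coding).
y = np.random.randint(0, 2, size=len(X))

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```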
Studies of mental state attribution to robots usually rely on verbal measures. However, verbal measures are sensitive to people's rationalizations, and the outcomes of such measures are not always reflected in a person's behavior. In light of these limitations, we present the first steps toward developing an alternative, non-verbal measure of belief attribution to robots. We report preliminary findings from a comparative study indicating that the two types of measures (verbal vs. non-verbal) are not always consistent. Notably, the divergence between the two measures was larger when the task of inferring the robot's belief was more difficult.
In many types of human-robot interactions, people must track the beliefs of robots based on uncertain estimates of robots' perceptual and cognitive capabilities. Did the robot see what happened and did it understand what it saw? In this paper, we present preliminary experimental evidence that people estimating what a humanoid robot knows or believes about the environment anthropocentrically assume it to have human-like perceptual and cognitive capabilities. However, our results also suggest that people are able to adjust their incorrect assumptions based on observations of the robot.
People's mental models of robots affect their predictions of robot behavior in interactions. The present study highlights some of the uncertainties that enter specifically into people's considerations about the minds and behavior of robots by exploring how people fare in the standard "Sally-Anne" false-belief task from developmental psychology when the protagonist is a robot.
Recent developments in robotics are potentially changing the nature of service, and research in human-robot interaction has previously shown that humanoid robots could possibly work in public spaces. We conducted an ethnographic study with the humanoid robot Pepper at a central train station. The results indicate that people are not yet accustomed to talking to robots, and people seem to expect that the robot does not talk, that it is a queue ticket machine, or that one should interact with it by using the tablet on the robot's chest.
State-of-the-art speech synthesis has considerably reduced the gap between synthetic and human speech at the perception level. However, the impact of speech style control on perception is not well known. In this paper, we propose a method to analyze the impact of controlling the TTS system parameters on the perception of the generated sentence. This is done through a visualization and analysis of listening test results. For this, we train a speech synthesis system with different discrete categories of speech styles. Each style is encoded using a one-hot representation in the network. After training, we interpolate between the vectors representing each style. A perception test showed that, despite being trained with only discrete categories of data, the network is capable of generating intermediate intensity levels between neutral and a given speech style.
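A minimal sketch of the interpolation step described here, assuming a trained model that accepts a continuous style vector; the style names and the synthesis call are illustrative, not the authors' code.

```python
import numpy as np

STYLES = ["neutral", "joyful", "sad", "angry"]  # illustrative style categories

def one_hot(style):
    v = np.zeros(len(STYLES))
    v[STYLES.index(style)] = 1.0
    return v

def interpolate(style_a, style_b, alpha):
    """Blend two one-hot style vectors; alpha=0 gives style_a, alpha=1 gives style_b."""
    return (1.0 - alpha) * one_hot(style_a) + alpha * one_hot(style_b)

# Generate sentences at intermediate intensities between neutral and a target style.
for alpha in np.linspace(0.0, 1.0, 5):
    style_vector = interpolate("neutral", "joyful", alpha)
    # audio = tts_model.synthesize(text, style=style_vector)  # hypothetical TTS call
```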
Social robots offer versatile possibilities to engage children in social interaction and are increasingly implemented in educational contexts. However, how children enter into social interaction with an interlocutor is substantially influenced by their temperament. In this paper, we present preliminary findings of a long-term study on child-robot interaction for language learning, in which we focused on children's positive and negative expressions of shyness within the interaction over multiple sessions with the robot. We found that shy children initially displayed significantly fewer positive signals. However, in the long term, shy children seem to be able to overcome their restrained behavior in interaction with the robot.
This paper presents ongoing work that aims for real-time action recognition specifically tailored for child-centered research. To this end, we collected and annotated a dataset of 200 primary school children aged 6 to 11 years old. Each child was asked to perform seven actions: boxing, waving, clapping, running, jogging, walking towards the camera, and walking from side to side. Two camera perspectives are provided, with a top view in RGB format and a frontal view in both RGB and RGB-D formats. Body keypoints (skeleton data) are extracted using the OpenPose and OpenNI tools. The results of this work are expected to bridge the performance gap between activity recognition systems for adults and children.
Attention is an important mechanism for solving certain tasks, but our environment can distract us via irrelevant information. As robots increasingly become part of our lives, one important question is whether they could distract us as much as humans do, and if so to what extent. To address this question, we conducted a study in which subjects were engaged in a central letter detection task. The task-irrelevant distractors were pictures of three agents: a mechanical robot, a human-like robot, and a real human. We also manipulated the perceptual load to investigate whether the demands of the task influence how much these agents distract us. Our results show that robots distract people as much as humans, as demonstrated by a significant increase in reaction times and a decrease in task accuracy in the presence of agent distractors compared to the situation in which there was no distractor. However, we found that task difficulty interacted with the human-likeness of the distractor agent. When the task was less demanding, the agent that distracted most was the most human-like agent, whereas when the task was more demanding, the least human-like agent distracted the most. These results not only provide insights about how to design humanoid robots but also serve as an example of fruitful collaboration between human-robot interaction and the cognitive sciences.
Robots have been found to be effective tools for programming instruction, although it is not yet clear why students learn more using robots than they do from 'traditional' programming instruction. In this study, 121 nine- to twelve-year-old children received programming training in pairs, in one of two conditions: using either a robot or a virtual avatar. The training was videotaped to study differences in children's cooperation. Furthermore, children's learning outcomes and motivation were assessed through questionnaires. Children were found to learn more from programming the robot than the avatar, although no differences in their cooperation during the training or self-reported motivation were found between the two conditions. Thus, future research is required to further understand how exactly robots lead to higher learning outcomes than 'traditional' tools.
Musculoskeletal Disorders (MSDs) are common occupational diseases. An interesting research question is whether collaborative robots can actively minimise the risk of MSDs during collaboration. In this work, ergonomic adaptation of robotic movements during human-robot collaboration is explored in a first test case, namely adjustment of work surface height. Vision-based markerless posture estimation is used as input, in combination with ergonomic assessment methods, to adapt robotic movements in order to facilitate better ergonomic conditions for the human worker.
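As an illustration only, the following sketch shows one way vision-based posture estimates could drive a work-surface adjustment: compute the elbow angle from tracked joints and nudge the surface toward a comfort range. The angle range and step size are assumed values for the example, not parameters from the paper.

```python
import numpy as np

def elbow_angle(shoulder, elbow, wrist):
    """Angle at the elbow (degrees) from joint positions estimated by a posture tracker."""
    a = np.asarray(shoulder) - np.asarray(elbow)
    b = np.asarray(wrist) - np.asarray(elbow)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def adjust_surface_height(current_height_mm, angle_deg, comfort=(90.0, 110.0), step_mm=10):
    """Nudge the work surface until the elbow angle falls in an assumed comfort range."""
    low, high = comfort
    if angle_deg < low:
        return current_height_mm - step_mm   # arm too flexed: lower the surface
    if angle_deg > high:
        return current_height_mm + step_mm   # arm too extended: raise the surface
    return current_height_mm
```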
We examine the extent to which task engagement, social engagement, and social attitude in child-robot interaction can be predicted on the basis of Facial Action Unit (FAU) intensity. The analyses were based on child-robot and child-child interaction data from the PInSoRo dataset [1]. We applied Logistic Regression, Naive Bayes, and Probabilistic Neural Networks to these data. Results indicated that FAU intensities have potential to predict social dynamics in child-robot interactions (average balanced accuracy scores up to 84%), and illustrate a difference in behavior of children towards other children when compared to their interaction with robots.
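The classification setup above lends itself to a compact sketch: per-frame FAU intensities as features, engagement labels as targets, scored with balanced accuracy. The data below are placeholders (the 17-dimensional feature size mirrors common AU extractors and is an assumption), and only two of the three classifiers are shown.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Placeholder data: one row of FAU intensities per annotated frame,
# with a binary engagement label derived from the annotations.
X = np.random.rand(500, 17)
y = np.random.randint(0, 2, 500)

for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("naive bayes", GaussianNB())]:
    scores = cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy")
    print(f"{name}: balanced accuracy = {scores.mean():.2f}")
```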
This work describes an incidental finding from a longitudinal Human-Robot Interaction study that investigated whether a robot showing emotions during interactions with older adults was perceived differently from a robot that did not display emotions during the interaction. During this study we noted that some older adults found it hard to understand what the robot was saying, regardless of the volume of speech generated by the robot. The fact that they did not have problems understanding the researcher led us to investigate this accessibility-related issue in more depth. This paper describes the implications of this finding and recommendations on how to approach future work.
This research investigates whether there is an ethical concern in robots misrepresenting their internal state through speech. Participants were asked to discuss their food preferences with a robot, where the robot would respond either with facts or with an implied personal stance. Results show that there were no significant differences in the way participants perceived the robot or accepted the interaction, nor did the interaction influence their mood. This indicates that the use of personal opinion by a robot does not significantly impact participants' opinion of the robot and therefore may not necessarily be a concern in human-robot interactions.
This paper presents the initial efforts towards developing a robotic limb repositioning system. We aim to combine programming by demonstration and end-user programming in a tele-manipulation system that includes the user in the loop. We propose an approach based on a general-purpose mobile manipulator and a web-based interface where a user can select, edit, preview and execute different repositioning exercises based on the selected limb. This approach shows the potential to empower people who have mobility impairments to be more involved in an activity of daily living.
As a first step into utilizing Virtual Reality (VR) for Human-Robot Interaction (HRI) studies, we attempted to replicate a study from Kahn et al., which examined people's secret-keeping behaviour with robotic tour guides compared to human tour guides. Some changes had to be made to the original study, but the essence of the experiment was maintained. Results suggest that there are many differences in how the participants in this study viewed the various robot behaviours compared to how participants viewed the guides in the original study. As such, no conclusive statements can be made about the overall suitability of VR as a platform for conducting HRI studies.
This abstract presents proposed experimental work to consider what might be required for an 'ethical black box', essentially a robot data recorder, to inform robot accident investigation processes and the implications for HRI.
We present a robot exercise coach, co-designed and then trained in real-time by a human fitness instructor, via interactive machine learning, to support the UK National Health Service (NHS) Couch to 5km (C25K) programme. The programme consists of three weekly exercise sessions for 9 weeks, with sessions building up from a combination of short runs and walks to a full 30 minutes of running. The simplicity (and relatively boring nature) of the task places great importance on the ability of the robot, our 'C25K coach', to provide engaging and appropriate social supporting behaviour.
Earlier research has shown that robots can provoke social responses in people, and that robots often elicit compliance. In this paper we discuss three proof of concept studies in which we explore the possibility of robots being hacked and taken over by others with the explicit purpose of using the robot's social capabilities. Three scenarios are explored: gaining access to secured areas, extracting sensitive and personal information, and convincing people to take unsafe action. We find that people are willing to do these tasks, and that social robots tend to be trusted, even in situations that would normally cause suspicion.
This late-breaking report presents a method for learning sequential and temporal mappings between music and dance via the Sequence-to-Sequence (Seq2Seq) architecture. In this study, the Seq2Seq model comprises two parts: an encoder that processes the music inputs and a decoder that generates the output motion vectors. The model can accept music features and motion inputs from the user in human-robot interactive learning sessions, and it outputs motion patterns that teach corrective movements for following the moves of the expert dancer. Three different types of Seq2Seq models are compared in the results and applied to a simulation platform. This model will be applied in social interaction scenarios with children with autism spectrum disorder (ASD).
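A minimal encoder-decoder sketch of the kind of Seq2Seq mapping described here. The music and motion dimensionalities, and the use of GRUs, are assumptions for the example rather than details from the report.

```python
import torch
import torch.nn as nn

class MusicToMotionSeq2Seq(nn.Module):
    """Minimal encoder-decoder: music feature frames in, motion vectors out."""
    def __init__(self, music_dim=40, motion_dim=17, hidden=128):
        super().__init__()
        self.encoder = nn.GRU(music_dim, hidden, batch_first=True)
        self.decoder = nn.GRU(motion_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, motion_dim)

    def forward(self, music, prev_motion):
        _, state = self.encoder(music)                 # summarize the music sequence
        dec_out, _ = self.decoder(prev_motion, state)  # condition decoding on that summary
        return self.out(dec_out)                       # predicted motion vectors per timestep

# Illustrative shapes: batch of 8 clips, 100 music frames (e.g., MFCCs), 50 motion frames.
model = MusicToMotionSeq2Seq()
music = torch.randn(8, 100, 40)
prev_motion = torch.randn(8, 50, 17)
pred = model(music, prev_motion)   # shape (8, 50, 17)
```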
This paper presents a subjective evaluation of the emotions of a wheeled mobile humanoid robot expressing emotions during movement by replicating human gait-induced upper body motion. For this purpose, we equipped the robot with a vertical oscillation mechanism that generates such motion based on the human center-of-mass trajectory. In the experiment, participants watched videos of the robot's different emotional gait-induced upper body motions, assessed the type of emotion shown, and rated their confidence in their answer. We calculated the emotion recognition rate and the average confidence level of the answers. We found that participants gave higher confidence levels for the robot's emotional movement with vertical oscillation than for movement without it.
Human emotion detection is an important aspect of social robotics and of human-robot interaction (HRI). In this paper, we propose a vision-based multimodal emotion recognition method based on gait data and facial thermal images designed for social robots. Our method can detect four human emotional states (i.e., neutral, happiness, anger, and sadness). We gathered data from 25 participants in order to build up an emotion database for training and testing our classification models. We implemented, trained, and tested several approaches, including a Convolutional Neural Network (CNN), a Hidden Markov Model (HMM), a Support Vector Machine (SVM), and a Random Forest (RF), in order to compare their emotion recognition ability and to find the best approach. We designed a hybrid model using both the gait and the thermal data; on our emotion database, its accuracy shows an improvement of 10% over the other models. This is a promising approach to be explored in a real-time human-robot interaction scenario.
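The abstract does not specify how the gait and thermal streams are combined, but a simple late-fusion baseline conveys the general idea: train one classifier per modality and average their class probabilities. Feature dimensions, the equal fusion weights, and the placeholder data below are all assumptions of the sketch.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

EMOTIONS = ["neutral", "happiness", "anger", "sadness"]

# Placeholder features: per-sample gait descriptors and thermal-image descriptors.
gait_X = np.random.rand(200, 30)
thermal_X = np.random.rand(200, 64)
y = np.random.randint(0, len(EMOTIONS), 200)

# Train one model per modality, then fuse their class probabilities (late fusion).
gait_clf = SVC(probability=True).fit(gait_X, y)
thermal_clf = RandomForestClassifier(n_estimators=100).fit(thermal_X, y)

fused = 0.5 * gait_clf.predict_proba(gait_X) + 0.5 * thermal_clf.predict_proba(thermal_X)
predictions = fused.argmax(axis=1)
```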
Androids, humanoids with a human-like appearance, are studied by a limited number of research communities but draw considerable attention. Because an android integrates many actuators, sensors and devices into one system, a standardized framework is beneficial for ease of development and for extending potential applications. In this report, we describe a ROS-based software framework for androids that aims to make generating, blending and reusing android motions intuitive. Several software applications and android behaviors have been developed under this framework and implemented on the child-like android "ibuki".
Operators of military Unmanned Aerial Vehicles (UAVs) work in dynamic environments, where they must use shared command and control (C2) maps to orient, plan and perform their work. The map is overloaded with information that is irrelevant to their immediate operational mission. This clutter may harm their situation awareness (SA) and increase workload. An intelligent and dynamic filter algorithm has been developed to reduce the clutter by filtering information items on the map based on the environmental context. Implementing it raises questions regarding the update rate of the map filter. Two update rates were tested and their effect on UAV operators' workload and SA was examined empirically. Operators benefited from higher update rates in terms of SA and workload. This is an important step towards the development of the algorithm, which conceptualizes how intelligent algorithms can be used to improve human operators' interaction with autonomous systems.
This paper presents ongoing work that aims to provide a quantitative analysis of a large clinical study conducted with 21 children aged 4-8 years old. The children were diagnosed with various forms of Autism Spectrum Disorder (ASD), Attention Deficit Hyperactivity Disorder (ADHD) or Delayed Speech Development (DSD) with autistic traits. Each child participated in four to six Robot-Assisted Therapy (RAT) sessions that lasted fifteen minutes each. We manually coded the videos from the sessions to find behavioral patterns, engagement and valence scores, and we are now in the process of statistical data analysis to understand whether children's exposure to a robot had a significant effect over time.
We describe the design and evaluation of a humanoid robot that explains inherited breast cancer genetics, and motivates women to obtain cancer genetic testing. The counseling dialogue is modeled after a human cancer genetic counselor, extended with data visualizations and nonverbal behavior. In a quasi-experimental pilot study, we demonstrated that interaction with the robot leads to significant increases in cancer genetics knowledge.
This project investigates the efficacy of a robotic arm as a paradigm-changing teaching aid in a graphomotorics therapy environment. The project investigates whether collaboration between a human teacher and a robotic arm can significantly improve the quality of a teaching session. A series of experiments tested two modes of robotic arm movements, "human-like" and "robotic", with two different robots. The experiments featured exercises that improve abilities contributing to graphomotorics, such as fine motor skills and serial and spatial memory. These exercises, based predominantly on pointing and gesturing by a teacher, were conducted by a robotic arm and included feedback on successful completion of the task. Results of preliminary experiments showed that, in addition to positive interaction between the student and the robot, the basic relationship between student and teacher was also impacted by shifting the balance of power. Further research is required to determine the optimal human-robot involvement.
Improving surgical training has the potential to reduce medical errors and consequently to save many lives. We briefly present our efforts to improve this training for robot-assisted surgery. In particular, we explore how data collected from expert demonstrations can enhance the training efficiency for novices. Thus far, our results show that combining hand-over-hand training based on experts' motion data with trial and error training can improve the training outcomes in robotic and conventional laparoscopic surgery settings. We briefly describe our current efforts for exploring how gaze-based training methods, based on experts' eye gaze data, can improve the training outcomes as well.
A key challenge of human-robot collaboration is to build systems that balance the usefulness of autonomous robot behaviors with the benefits of direct human control. This balance is especially relevant for assistive manipulation systems, which promise to help people with disabilities more easily control wheelchair-mounted robot arms to accomplish activities of daily living. To provide useful assistance, robots must understand the user's goals and preferences for the task. Our insight is that systems can enhance this understanding by monitoring the user's natural eye gaze behavior, as psychology research has shown that eye gaze is responsive and relevant to the task. In this work, we show how using gaze enhances assistance algorithms. First, we analyze eye gaze behavior during teleoperated robot manipulation and compare it to literature results on by-hand manipulation. Then, we develop a pipeline for combining the raw eye gaze signal with the task context to build a rich signal for learning algorithms. Finally, we propose a novel use of eye gaze in which the robot avoids risky behavior by detecting when the user believes that the robot's behavior has a problem.
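As a toy illustration of combining a raw gaze signal with task context (not the authors' pipeline), the sketch below scores candidate goal objects by how close recent fixations fall to them; the object set, positions, and kernel width are made up for the example.

```python
import numpy as np

def goal_probabilities(fixations, object_positions, sigma=0.05):
    """Turn recent 3D gaze fixations into a soft distribution over candidate goal objects.

    fixations: (T, 3) array of recent fixation points
    object_positions: dict mapping object name -> (3,) position
    """
    names = list(object_positions)
    scores = np.zeros(len(names))
    for fix in fixations:
        dists = np.array([np.linalg.norm(fix - object_positions[n]) for n in names])
        scores += np.exp(-dists**2 / (2 * sigma**2))   # closer objects get more credit
    return dict(zip(names, scores / scores.sum()))

# Illustrative usage with hypothetical workspace objects.
objects = {"cup": np.array([0.4, 0.1, 0.02]), "spoon": np.array([0.2, -0.1, 0.02])}
fixations = np.array([[0.39, 0.09, 0.05], [0.41, 0.12, 0.04]])
print(goal_probabilities(fixations, objects))
```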
Learning from human input has enabled autonomous agents to perform increasingly more complex tasks that are otherwise difficult to carry out automatically. To this end, recent works have studied how robots can incorporate such input - like demonstrations or corrections - into objective functions describing the desired behaviors. While these methods have shown progress in a variety of settings, from semi-autonomous driving, to household robotics, to automated airplane control, they all suffer from the same crucial drawback: they implicitly assume that the person's intentions can always be captured by the robot's hypothesis space. We call attention to the fact that this assumption is often unrealistic, as no model can completely account for every single possible situation ahead of time. When the robot's hypothesis space is misspecified, human input can be unhelpful - or even detrimental - to the way the robot is performing its tasks. Our work tackles this issue by proposing that the robot should first explicitly reason about how well its hypothesis space can explain human inputs, then use that situational confidence to inform how it should incorporate them.
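One simple reading of that situational-confidence idea can be sketched as follows: for each hypothesis in the robot's space, ask how likely the observed human input would be under a Boltzmann-rational choice model, and treat the best such likelihood as the confidence. The linear reward model, the finite set of alternative inputs, and the rationality coefficient are assumptions of the sketch, not the paper's formulation.

```python
import numpy as np

def explanation_confidence(observed_phi, alternative_phis, candidate_weights, beta=5.0):
    """Rough check of whether any hypothesis makes the observed human input look rational.

    observed_phi: feature vector of the input the person actually gave
    alternative_phis: (M, d) features of other inputs the person could have given instead
    candidate_weights: (K, d) reward-weight hypotheses in the robot's hypothesis space
    """
    best_conf = 0.0
    for w in candidate_weights:
        # Boltzmann-rational likelihood of the observed input under hypothesis w.
        observed_score = np.exp(beta * (observed_phi @ w))
        alternative_scores = np.exp(beta * (alternative_phis @ w))
        conf = observed_score / (alternative_scores.sum() + observed_score)
        best_conf = max(best_conf, conf)
    return best_conf  # a low value suggests the hypothesis space cannot explain the input
```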
My research concerns group influence and prosocial behavior in a Human-Robot Interaction (HRI) context. My collaborators and I created and ran an experiment (N = 30) to measure if the emotional responses of a group of robots could induce participants to take prosocial action against robot abuse. Participants completed a collaborative block-building task with a confederate, during which the confederate abused one robot after it made mistakes. We measured participants' responses to these events. The results of the study indicate that humans are more likely to prosocially intervene when the bystander robots react in sadness as opposed to when they ignore the abuse. They motivate further research on social influence and group dynamics within HRI.
One of the key challenges of current state-of-the-art robotic deployments in public spaces, where the robot is supposed to interact with humans, is the generation of behaviors that are engaging for the users. Eliciting engagement during an interaction, and maintaining it after the initial phase of the interaction, is still an issue to be overcome. There is evidence that engagement in learning activities is higher in the presence of a robot, particularly if novel [1], but after the initial engagement state, long and non-interactive behaviors are detrimental to the continued engagement of the users [5, 16]. Overcoming this limitation requires designing robots with enhanced social abilities that go beyond monolithic behaviours and introduce in-situ learning and adaptation to the specific users and situations. To do so, the robot must have the ability to perceive the state of the humans participating in the interaction and use this feedback to select its own actions over time [27].
The technology of the future will bring an increasing number of robots into our daily life. This has motivated a number of researchers to explore diverse factors to promote social interaction with robots. This PhD project aims at investigating social power dynamics in Human-Robot Interaction. Social power is defined as one's ability to influence others to do something which they would not do otherwise. Different theories classify alternative ways to achieve social power, such as providing rewards, using coercion, or acting as an expert. After conceptualizing social power to allow implementation in social agents, we studied how those power strategies affect persuasion when using robots. Specifically, we attempted to design persuasive robots by creating persuasive strategies inspired from social power.
Strawberries are an important cash crop that are grown worldwide. They are also a labour-intensive crop, with harvesting a particularly labour-intensive task because the fruit needs careful handling. This project investigates collaborative human-robot strawberry harvesting, where interacting with a human potentially increases the adaptability of a robot to work in more complex environments. The project mainly concentrates on two aspects of the problem: the identification of the fruit and the picking of the fruit.
Patients often do not trust their physicians with confidential, private information. They are worried about judgment, and ultimately this leads to poorer health outcomes. Physicians also do not listen to specific groups of people, biasing healthcare decisions. It may, therefore, be helpful to complement or delegate some of a physician's tasks to a robot. People are more willing to disclose private information to robots, which they find unbiased and free of negative judgment [2]. Robots can ask all relevant questions regardless of sex, gender, or sexual orientation [11]. This proposal explores the use of robotics within medicine, evaluating patient trust and information disclosure, to supplement and promote unbiased healthcare provider decisions. The experiment will employ a physician to conduct 90 patient interviews across three groups (G) using the standardized Brown Interview Checklist, either without (G1) or with (G2) a proxy robot. Patients interviewed by the robot will be split between those aware (G2a) or unaware (G2b) that a physician will be controlling the robot. We hypothesize that using a physical robot will improve information disclosure with less stress, and perhaps even off-load physician workload for more targeted and appropriate healthcare decisions.
Gestural communication is an important aspect of HRI in social, assistance and rehabilitation robotics. Indeed, social synchrony is a key component of interpersonal interactions which affects the interaction at a behavioral level as well as at a social level. It is therefore paramount for the robot to be able to adapt to its interaction partner, lest the interaction feel awkward. Bio-inspired controllers endowed with plasticity mechanisms can be employed in order to make these interactions as natural and enjoyable as possible. Integrating adaptive properties can lead to the emergence of motor coordination and hence to social synchrony. A non-negligible aspect of the work consists in studying humans in HRI to understand human behavior better and design better interactions. In the long term, this could be quite useful for improved robot-assisted motor therapy.
Autonomous systems requiring user supervision or manual control of autonomy are becoming more prevalent in real world deployments. These systems will require transitions of control between autonomous and manual operations with users being required to both take control from and cede control to autonomy. Prior work in Human-Drone Interaction (HDI) has observed or designed for user interaction with a perfectly functioning robot, but has not looked at interactions with a robot that is about to fail. In this paper we describe results from initial work on characterizing user responses to failures in aerial autonomous systems. Ongoing and future work involves evaluating user proficiency in system operation and its impact on HDI with semi-autonomous systems. This work is novel in the context of small Unmanned Aerial Vehicles (sUAVs) and will inform sUAV autonomy designers for systems with a range of user training from search and rescue to hobbyist users through recommendations for training, necessary timelines for information sharing, and failure planning or contingency options in HDI.
Trust is an integral part of almost any human-robot interaction (HRI). The most technologically advanced robot can sit unused if a human interactant does not trust it. Conversely, a robot that is overly trusted may be assumed to be more advanced than it truly is, resulting in over-reliance on an imperfect system [8]. One of the most widely used definitions for trust in HRI is that trust is "the attitude that an agent will help achieve an individual's goals in a situation characterized by uncertainty and vulnerability." [6]. As robots initially emerged into society in factory roles, their goals were clear and their performance was concretely measurable with metrics such as time to completion, number of errors in a given behavior, and behavior consistency over time. The humans who worked with them, therefore, could base their trust in the robots on how effectively they achieved these clearly defined goals.
People are inherently playful, and playfulness matters not only when engaging in actual play but also in all other activities. Based on this, I propose using a ludic design approach as a means to broaden the design space for Human-Robot Interaction (HRI). In this paper, I discuss the application of ludic design in HRI and explore how ludic activities can act as a mechanism for achieving new understandings from people during their interactions with robots. Two projects, BubbleBot and Sketching Robot, are presented as cases of designing ludic activities with robots. In my work, I have investigated how people perceived robots by applying exploratory research through design methods in creating the ludic experiences. I am continuously learning from my design and exploring potential areas for designing robots while identifying new values and goals for having robots in our lives.
Quality of life (QoL) is especially important for palliative care patients, who can be at risk of mental health issues. This project aims to design and implement an immersive control system for a robotic surrogate, that allows users to interact as naturally as possible with their loved ones from a remote location, and navigate appropriately, in order to improve QoL for those living with a terminal illness.
This thesis summary presents research focused on incorporating high-level abstract behavioral requirements, called 'conceptual constraints', into the modeling processes of robot Learning from Demonstration (LfD) techniques. This idea is realized via an LfD algorithm called Concept Constrained Learning from Demonstration. This algorithm encodes motion planning constraints as temporally associated logical formulae of Boolean operators that enforce high-level constraints over portions of the robot's motion plan during learned skill execution. This results in more easily trained, more robust, and safer learned skills. Current work focuses on automating constraint discovery, introducing conceptual constraints into human-aware motion planning algorithms, and expanding upon trajectory alignment techniques for LfD. Future work will focus on how concept constrained algorithms and models are best incorporated into effective interfaces for end-users.
In shared autonomy, human teleoperation blends with intelligent robot autonomy to create robot control. This combination enables assistive robot manipulators to help human operators by predicting and reaching the human's desired target. However, this reduces the control authority of the user and the transparency of the interaction. This negatively affects their willingness to use the system. We propose haptic feedback as a seamless and natural way for the robot to communicate information to the user and assist them in completing the task. A proof-of-concept demonstration of our system illustrates the effectiveness of haptic feedback in communicating the robot's goals to the user. We hypothesize that this can be an effective way to improve performance in teleoperated manipulation tasks, while retaining the control authority of the user.
Devices with multiple modes increase user stress, workload, and error rate. Robots that operate at multiple levels of autonomy offer a wide range of functions, but may increase error. In this work, we develop and test multimodal interfaces to communicate a robot's current level of automation to a user who is engaged in a high-workload task. Future studies will apply these findings to multiple-human, multiple-robot teams.
As robots find their way into homes, workplaces, and public spaces, rich and effective human-robot interaction will play an essential role in their success. While most sound-related research in the field of HRI focuses on speech and semantic-free utterances, the potential of sound as an implicit non-verbal channel of communication has only recently received attention and remains largely unexplored. This research will bring design approaches from the fields of sound design and spatial audio into the context of human-robot interaction to influence human perception of robot characteristics and refine non-verbal auditory communication. It will implement sound design systems into various physical robots and evaluate their effect through user studies. By developing design principles for the sonic augmentation of robots, we aim to provide the HRI community with new tools to enrich the way robots communicate with humans.
As robots are being designed to support older people in their living spaces, the use of robots in these contexts, and particularly for Ambient Assisted Living (AAL), may come with issues of trust. While trust has gained increasing interest in HRI, we still have little understanding of trust when robots are used in people's living spaces and their social practices are taken into account. Drawing on the literature, a methodological contribution and various (long-term) studies with older people and technology, the aim of this thesis is to define design guidelines for trust in situated human-robot interaction in older people's living spaces.
Imagine a future where a domestic robot ships with a state-of-the-art learning from demonstrations (LfD) system to learn household tasks. You would like the robot to set the dinner table for you when you get home at dinner time. After you demonstrate how to set the dinner table a couple of times, would you be confident that the robot will not try to place the saucer on top of the cup, or that it will finish as much of the task as possible if an object is missing?
Social human-robot interaction is concerned with exploring the ways in which social interaction can be achieved between a human and a sociable robot. Affect has an important role in interaction as it helps interactants coordinate and indicate the success of the communication. Designing socially intelligent robots requires competence in communication, which includes exchanges of both verbal and non-verbal cues. This project will focus on non-verbal communication, more specifically body movements, postures and gestures as means of conveying socially affective information. Using the affective grounding perspective, which conceptualizes emotion as a coordination mechanism, together with honest signals as a measurement of the dynamics of the interaction, and the robot Pepper, we aim to develop a system that would be able to communicate affect, with the goal of enhancing affective human-robot interaction.
This paper describes early work in the intersection of Mixed Reality for Human-Robot Interaction and Brain-Computer Interface fields. Our research seeks to answer these two questions: (1) How do different types of mental workload impact the effectiveness of different robot communication modalities? (2) How can a robot select the effective communication modality given information regarding its human teammate's level and type of mental workload?
One of the numerous approaches that increase the interaction quality between two people is having a proper understanding of the other person's perspective. In this doctoral thesis, we aim to understand children's perspective taking behavior, create a perspective taking framework for social robots, and evaluate the framework in educational scenarios and real-life interactions. The research started by designing tasks that allow us to analyze and decompose children's decision-making mechanisms in terms of their perspective taking choices. We collect data from a series of studies that capture the dynamic between the child and the robot using different perspective taking tasks and develop a complementary adaptive model for the robot. This article summarizes the perspective taking tasks, experimental studies, and future work towards developing a comprehensive model of perspective taking for social robots.
The present pollution problem can be partially attributed to a lack of ecological and environmental literacy skills. Although robotics in education is increasing, there has been little interest in developing devices designed to teach children how to be environmentally conscious and, in particular, how to recycle. This gap is the basis for our robot, which we call Smart Trash Junior: a mechatronic trashcan that uses vision recognition to identify recyclable objects and engages elementary school children in a dialogue that teaches them how to recycle.
Early life adversity is a major risk factor for the development of psychological and behavioral problems in adult life. Traumatic experiences in childhood are linked to higher rates of depression, anxiety disorders and a range of other mental health issues [4]. Additionally, stories form the basis of understanding in children, help develop empathy, and cultivate imaginative and divergent thinking. In this paper, we expound on the idea of leveraging the potential of storytelling through an interactive toy, i.e., a transitional object, as a means of purposeful intervention to help children understand and cope better with stressors during their developmental years.
Flumzis is a DIY social robot that optimizes the learning process of college students who spend the entire night studying. It helps them by measuring their study time, monitoring break times, giving advice on how to eat healthily, and offering tips to make the study night as productive as possible. Flumzis also has a smart accessory, an intelligent base that enables a number of extra functions; the robot and its base work together to support these functions.
This inquiry looks at the methodology of design for new types of companion robots in the context of a domestic setting. Personalization is essential, but most Human-Robot Interaction (HRI) research focuses on adaptive behaviour for social interactions using commercially available devices. These robots are finished products, with very little room left for meaningful physical alterations. The goal of this research is to study the impact on users of a robot that, by design, offers a wide range of options for personal customisation.
The "Sorting Hat", inspired from Harry Potter by J.K. Rowling was designed as a mediator social robot for autistic children. Since they are unable to interact with peers and show their emotions, the hat plays the role of a mediator to fill the communication gap. It is recommended to use the Sorting Hat in environments where children interact or in a domestic area. The hat was tested with 430 random set of volunteers to measure the attraction, the interactivity in use and the data demonstrated positive results. It also consists of an EEG sensor to record the raw brain wave data for future cognitive analysis.
The elderly are more affected by higher environmental temperatures. If they misperceive the temperature, it can lead to a number of potentially dangerous health issues. To address this, we propose a robot that sweats to indicate the high environmental temperature to the elderly. In this paper, we present the development of our first prototype and the design considerations of the second prototype.
In order to live more sustainably, it is important to reduce, reuse and recycle the waste we produce. Therefore, those three principles of the waste hierarchy should be adopted from an early age to become a natural part of everyday behavior. Our robotic monster MoBi is an interactive robot designed to teach children how to handle familiar waste types produced in the classroom setting. It supports children in the decision-making process involved in correct waste separation and educates them about ways to avoid waste in the first place.
In this project we present the design concept of RemindLy, a small egg-shaped robot whose purpose is to remind its user of the daily to-do list. Users can interact with their RemindLy using both voice commands and physical interactions, designed as small to large tilts and rotations of the robot body. A small camera is placed on the robot to add a wake-up function when the face of the owner is recognized. To properly set up the notes and customize the robot, a phone app will be developed.
The social pendulum is an oscillating robot which exhibits certain personality traits based on its proximity to 'known' people, through audible cues and changing patterns of its oscillations. The onboard camera(s) recognize faces of people in its surroundings; each new face is registered and a corresponding 'behavior metric' is prepared. Parameters such as time spent in its proximity and the kind of words spoken affect the individual's behavior metric, which determines the pendulum's reaction.
Our robot is a very friendly, helpful and considerate morning-companion clock, but also a very persuasive one! It uses multiple methods to effectively wake you up (light, sounds and movements), then guides you through a simple stretching routine, accompanied by fresh aroma and soft music, to help you make a perfect start to the day. Contrary to other alarm clocks, which wake you up by being annoying, the morning companion is so nice and pleasant that you can form a bond with it and want to wake up to start the day in its company.
This work demonstrates the use of the Cozmo robot as a tool to engage younger children in Pakistan and teach them traffic rules in a fun and interactive manner. This was the first effort of its kind to encourage traffic rule learning in children through the use of a social robot in the schools of Pakistan. Our ongoing work is towards understanding the social impact of this effort, mainly whether children question their parents about traffic violations after the lessons. We achieve this by creating a curriculum for social robots to teach traffic rules at schools in Pakistan.
Creativity is at the core of what it means to be human. It is an intrinsic ability that we all have, and it influences our well-being and self-expression throughout life. However, a decline in creative abilities occurs in children around the age of 7 years old. Our work aims to contribute to a re-balancing of creativity levels using social robots. In this video, we describe YOLO, an autonomous robotic toy for children that fosters their creativity during play. This robot is envisioned to be used as a character during storytelling, promoting creative story-lines that might not emerge otherwise.
In the course of the past ten years, soft robotics has become a growing field of research. This video presents SONŌ, a soft robot that features real-time sound generation based on FM synthesis in accordance with its movements. The system was constructed to explore how sound might augment soft robotics technology and potentially facilitate more engaging interactions with soft robots.
This video presents how people responded to a robot asking for help at six cafes at the Oregon State University campus. Each cafe was visited twice over eight weeks between August and September 2019, always around lunchtime for a two-hour period. Many participants expressed their delight at the presence of the robot, as seen in their help and care behaviors, and communications with each other. The wizarded mobile robot, called a ChairBot, had a whiteboard indicating its current ordering request, as well as a money clip for payment. We conducted fly-on-the-wall observations, participant interviews, and grounded coding to understand why and how people helped the robot. People helped the robot because: (1) they were curious, (2) they wanted to help the people behind the robot, and (3) they wanted to be perceived as ethical. The video shows these interactions in context, with diverse human-robot communication strategies and unexpected emergent behaviors that illustrate the value of in-the-wild studies.
Using Augmented Reality (AR) interfaces, motion paths and tasks for co-bots in the factory can become visible and interactive. We present Kinetic AR, a system to control and manipulate motion of an MIR100 Automated Guided Vehicle (AGV) in Augmented Reality using the Reality Editor platform. The MIR100 robot performs a mapping of the environment using laser scanners. We synchronize the coordinate systems recognized by the smartphone and the AGV by performing spatial mapping. This allows for a seamless interaction where the user can control the motion of the AGV in an intuitive and spatial manner, with no technical requirement beyond a mobile phone. The user can perform path planning and visualize the motion of the AGV in real-time in AR. The synchronization of both environments allows for a usable manipulation where the AGV is aware of the position of the phone at all times and can perform actions such as following the user or moving towards the position where the phone is pointing on the floor. Moreover, motion checkpoints can be actionable and visually connected to other equipment in order to program the coordinated behavior of multiple systems. The platform is spatially aware and allows for a co-located seamless interaction between machines. We envision this technology as a usable interface for the creation and visualization of manifold AGV operations while maintaining a low entry barrier to complex spatial hardware programming.
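The description above hinges on aligning the phone's AR frame with the AGV's laser map. A generic way to do this, sketched below with made-up poses (the paper does not detail its synchronization math), is to express a shared anchor in both frames and derive the transform between them.

```python
import numpy as np

def pose_to_matrix(x, y, theta):
    """2D pose (metres, radians) as a 3x3 homogeneous transform."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0, 0, 1]])

# Suppose both devices observe the same physical anchor (e.g., a marker on the floor).
anchor_in_phone = pose_to_matrix(1.2, 0.3, 0.0)      # anchor pose in the phone's AR frame (assumed)
anchor_in_agv = pose_to_matrix(4.0, 2.5, np.pi / 2)  # anchor pose in the AGV's laser map (assumed)

# Transform that maps phone-frame coordinates into the AGV's map frame.
phone_to_agv = anchor_in_agv @ np.linalg.inv(anchor_in_phone)

# A path checkpoint tapped on the phone can now be sent to the robot's planner.
checkpoint_phone = np.array([0.5, -0.2, 1.0])        # homogeneous point in the phone frame
checkpoint_agv = phone_to_agv @ checkpoint_phone
```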
As autonomous vehicles (AVs) become a reality on public roads, researchers and designers are beginning to see unexpected behaviors from the public. Ranging from curiosity to vandalism, these behaviors are concerning as AV platforms will need to know how to deal with people behaving unexpectedly or aggressively.
We call these antagonistic behaviors griefing of AVs, adopting the term from online gaming, which Warner and Raiter define as "Intentional harassment of other players...which utilizes aspects of the game structure or physics in unintended ways to cause distress". We used the term griefing (rather than bullying) because not all of the behavior was intended to be violent or demeaning. However, any behavior that delays an AV's journey could be problematic for AV developers and consumers.
We observed ten griefing instances over four years and five studies of pedestrian-AV behavior in three countries. For each study, we modified a conventional vehicle to appear autonomous through fake LiDAR and decals saying "Driverless Vehicle". The driver hid beneath a costume that looked like a car seat, allowing them to remain in control of the vehicle at all times while the vehicle appeared fully autonomous from the outside. Pedestrians were generally convinced of the illusion, as confirmed through interviews with consenting pedestrians and video recordings of all interactions. Full detail on the study, as well as proposed design principles to counter this behavior, will be published at HRI 2020 as a full paper.
These observations build on accounts of bullying towards robots that have been previously reported in the HRI community. While AV developers such as Uber and Waymo have shared anecdotes of past vandalism, we believe this to be the first public video made available that captures the range of griefing from playful to aggressive.
We hope this video stimulates conversation regarding appropriate design principles to counter griefing towards AVs. Several researchers study motivations behind this behavior, and it remains unclear how long it will take for it to naturally subside. In the meantime, AVs should be designed with this behavior in mind.
The EU-funded MuMMER project (http://mummer-project.eu/) has developed a socially intelligent robot to interact with the general public in open spaces. One of the core tasks for the robot is to guide visitors to specific locations in the mall. The primary MuMMER deployment location is Ideapark, a large shopping mall in Lempäälä, Finland. The MuMMER robot system has been taken to the shopping mall several times for short-term co-design activities with the mall customers and retailers; the full robot system was deployed for short periods in the mall in September 2018, May 2019, and June 2019, and has been installed for a long-term, three-month deployment as of September 2019.
Stand-up comedy performance is one interesting environment in which to evaluate social robot abilities; the natural format and expectations of the art form make it well suited for experimentation. We brought Jon the Robot, our autonomous robotic stand-up comedian, to over 20 performances in Los Angeles and recorded video footage from some of these performances. Our video is an entertaining compilation of this footage in a mock movie trailer format, including light details about the capabilities of the system.
This explanatory video gives a short overview of the CoWriting Kazakh project, which aims to implement autonomous behavior for a social robot that assists and motivates children in learning a new script and its associated handwriting. The system integrates real-time handwriting recognition for the Kazakh language to automatically convert Cyrillic to Latin. The algorithm was trained on the Cyrillic-MNIST dataset, which contains more than 120,000 samples from more than 100 people. The system was utilized in an experiment with 67 children.
This workshop is the second in a series bringing together the Natural Language Generation and Human-Robot Interaction communities to discuss topics of mutual interest with the goal of developing an HRI-inspired NLG shared task. The workshop website is https://purl.org/nlg-hri-workshop/2020.
Social robotics has been productive in recent years, generating a variety of product concepts in conjunction with IoT and smart home technologies. There are even social robots that have made it to market as a viable alternative to disembodied smart speakers. This itself is a good development, considering that social robots remained a laboratory fixture in the past decade. Despite the positive outlook for social robotics, questions regarding its long-term adoption and commercial prospects still linger. The main challenge for social robotics is to find avenues for expanding current use-case applications to maximize the communication and interaction potential of the social robot, which is currently limited to copying smart speakers' functionalities. This workshop will gather roboticists, designers, engineers and stakeholders to discuss how to address these challenges to the broader consumer development of social robots by finding meaningful ways to use the rich modalities unique to social robots in creative content creation. Content in the form of apps has been the driving force in the success of smartphones. Similarly, in the gaming world, content dictates the adoption of a gaming platform. In this workshop, we will focus on creative content generation and its extension to the social robot arena, with the hope of replicating its success in other technology domains through the development of social robot apps, not just for enhanced functionality but for more engaging and entertaining interactive applications.
The aim of this workshop is to discuss autonomous dialogue technologies in "Symbiotic Human-Robot Interaction." In order to inspire participants and encourage discussion, we will introduce our research activities and host invited talks by experts in human-robot interaction and dialogue systems. We also publicly invite poster presentations to share state-of-the-art technologies in this research area. Finally, through a panel discussion, we will clarify the key challenges in developing companions that live together with us.
This second annual, full-day workshop aims to explore the metrology necessary for repeatably and independently assessing the performance of robotic systems in real-world human-robot interaction (HRI) scenarios. This workshop continues the effort toward bridging the gaps between the theory and applications of HRI, enabling reproducible studies in HRI, and accelerating the adoption of cutting-edge technologies as the industry state of practice. The second annual workshop, "Test Methods and Metrics for Effective HRI," seeks to identify test methods and metrics for the holistic assessment and assurance of HRI performance in practical applications. The focus is on identifying the key performance indicators of seemingly disparate sectors and on fostering the community based on the principles of transparency, repeatability and reproducibility, and establishing trust. The goal is to aid in the advancement of HRI technologies through the development of experimental design, test methods, and metrics for assessing HRI and interface designs.
It has been argued that people are more inclined to accept a robot in their daily lives, and even their homes, if they have the feeling that the robot's behavior and interaction style matches their characteristics, needs and abilities. The concept of personalization has been introduced in order to create such personal, tailored human-robot interactions (HRI). Personalizing HRI means that the robot takes into account individual user characteristics and can adjust its behavior to the situation and the human interaction partner. Previous HRI research indicates that personalization can have positive effects on the user experience during HRI as well as on the user's attitudes towards and perceptions of the robot. This workshop is aimed at discussing the value of, and exchanging ideas about, the concept of personalization for HRI design. More concretely, we want to explore tools, methods and processes for appropriate robot behavior design and interaction modelling for personalized HRI. The workshop uses interactive, creative activities inspired by design thinking and participatory design to promote collaboration and co-creation between the participants and to inspire new, innovative perspectives on personalized HRI.
HRI research has predominantly focused on laboratory studies, producing a fundamental understanding of how humans interact with robots in controlled settings. As robots transition out of research and development labs into the real world, HRI research must adapt. We argue that it should widen its scope to explicitly include people who, unlike users, do not deliberately seek an interaction with a robot but find themselves in coincidental presence with robots. We refer to this often-forgotten group as InCoPs (incidentally copresent persons). In this one-day workshop, we aim to explore studies, design approaches, and methodologies for testing robots in real-world environments, considering both users and InCoPs. The first part of the workshop will consist of invited talks addressing the subject from different angles, followed by plenary discussions. Building upon this common basis, participants will work in small groups to explore (1) human behavior, (2) robot and interaction design and (3) methodology, respectively. This group phase will focus on the exemplary scenario of delivery robots in urban environments. At the end, key aspects across all three topics will be identified and discussed to map out research needs and desirable next steps in the field.
Robotic systems are becoming increasingly complex, hindering people from understanding the robot's inner workings [24]. Simply providing the robot's source code may be useful for software and hardware engineers who need to test the system for traceability and verification [3], but not for the non-technical user. Plus, looks can be deceiving: robots that merely resemble humans or animals are perceived differently by users [25]. This workshop aims to provide a forum for researchers from both industry and academia to discuss the user's understanding or mental model of a robot: what the robot is, what it does, and how it works. In many cases it will be useful for robots to estimate each user's mental model and use this information when deciding how to behave during an interaction. Designing more transparent robot actions will also be important, giving users a window into what the robot is "thinking", "feeling", and "intending". We envision a future in which robots can automatically detect and correct inaccurate mental models held by users. This workshop will develop a multidisciplinary vision for the next few years of research in pursuit of that future.
Advances in soft robotics, haptics, AI and simulation have changed the medical robotics field, allowing robotics technologies to be deployed in medical environments. In this context, the relationship between doctors, robotic devices, and patients is fundamental, as results in medical robotics can be achieved only through the synergetic collaboration of all three parties. This workshop focuses on the use of soft robotics technologies, sensing, AI and simulation to further improve medical practitioner training, as well as on the creation of new tools for diagnosis and healthcare through the medical interaction of humans and robots. The Robo-patient, more specifically, is the idea of creating a sensorised robotic patient with controllable organs that can present a given set of physiological conditions, both to investigate the embodied nature of haptic interaction in physical examination and to examine the doctor-patient relationship, so as to further improve medical practice through robotics technologies. The Robo-doctor aspect is also relevant, with robotic prototypes performing, or helping to perform, medical diagnosis. In the workshop, key technologies as well as future views of the field will be discussed by both established and up-and-coming researchers.
This workshop focuses on issues surrounding human-robot interaction for robot self-assessment of system proficiency. For example, how should a robot convey predicted ability on a new task? How should it report performance on a task that was just completed? Communities in both computer science and robotics have addressed questions of introspection to monitor system performance and adjust behavior to guarantee or improve performance. Self-assessment can range from simple detection of proficiency up through evaluation, explanation, and prediction. Robots need the ability to make assessments and communicate them a priori, in situ, and post hoc in order to support effective autonomy and utilization by human partners and supervisors. This is a pressing challenge for human-robot interaction for a variety of reasons. Prior work has shown that robot expression of performance can alter human perception of the robot and decisions on control allocation. There is also significant evidence in robotics that accurately setting human expectations is critical, especially when proficiency is below human expectations. Therefore, more knowledge is needed on how systems should communicate specifics about current and future task competence.
The 3rd International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interactions (VAM-HRI) will bring together HRI, Robotics, and Mixed Reality researchers to address challenges in mixed reality interactions between humans and robots. Topics relevant to the workshop include the development of robots that can interact with humans in mixed reality, the use of virtual reality for developing interactive robots, the design of augmented reality interfaces that mediate communication between humans and robots, comparisons of the capabilities and perceptions of robots and virtual agents, and best design practices. VAM-HRI 2020 will build on the success of VAM-HRI 2018 and 2019 and advance the cause of this nascent research community.
This workshop aims to bring together various disciplines to address the relationship between mentalizing, or adopting the intentional stance towards robots, and social attunement in human-robot interaction. The question will be tackled from empirical, theoretical, computational, and philosophical perspectives, with attention to potential applications in clinical domains. We invite speakers from the areas of cognitive and social neuroscience, psychology, computational modeling, cognitive science, human-robot interaction, robotics, and philosophy of mind who address the question of the conditions and consequences of attributing mental states to robots. We will also discuss individual differences in attitudes towards robots, including variability in the likelihood of adopting the intentional stance towards artificial agents. The concluding discussion will focus on how empirical results influence the implementation of behavior in robots, and on which application contexts should promote robot designs that elicit mentalizing. In the discussion, we will also address ethical aspects related to evoking socio-cognitive mechanisms (including mentalizing) towards robots.
Both affect and embodiment are of enormous importance for the field of HRI, which is increasingly interested in how different forms of robot embodiment influence the emotional state of the user. Designing and evaluating the affectivity of the robot body has become a frontier topic in HRI. To date, this is one of the few HRI workshops dedicated to affective robotics, and we propose three objectives: to identify relevant questions for the design of robotic bodies with high affective qualities; to consider cross-currents in ethical, philosophical, and methodological questions in studying emotional relations between humans and robots; and to foster synergies among designers, engineers, and social scientists in affective robotics.