Abyss Creations LLC has been selling highly realistic and anatomically complete dolls for decades. The interaction that customers have with the dolls has been the subject of many controversial discussions. David Levy has already shown that humanity has been rather inventive when it comes to machines used for sexual purposes. Abyss Creations recently revealed its Harmony platform, which consists of a robotic head attached to their doll bodies. This paper presents an interview with Matthew McMullen, CEO and Creative Director of Abyss Creations, about the Harmony platform and its implications for human-robot relationships.
Inside AI research and engineering communities, explainable artificial intelligence (XAI) is one of the most provocative and promising lines of AI research and development today. XAI has the potential to make expressible the context- and domain-specific benefits of particular AI applications to a diverse and inclusive array of stakeholders and audiences. In addition, XAI has the potential to make AI benefit claims more deeply evidenced. Outside AI research and engineering communities, one of the most provocative and promising lines of research happening today is the work on "humanoid capital" at the edges of the social, behavioral, and economic sciences. Humanoid capital theorists renovate older discussions of "human capital" as part of trying to make calculable and provable the domain-specific capital value, value-adding potential, or relative worth (i.e., advantages and benefits) of different humanoid models over time. Bringing these two exciting streams of research into direct conversation for the first time is the larger goal of this landmark paper. The primary research contribution of the paper is to detail some of the key requirements for making humanoid robots explainable in capital terms using XAI approaches. In this regard, the paper not only brings two streams of provocative research into much-needed conversation but also advances both streams.
The field of Human-Robot Interaction (HRI) lies at the intersection of several disciplines, and is rightfully perceived as a prime interface between engineering and the social sciences. In particular, our field entertains close ties with social and cognitive psychology, and there are many HRI studies which build upon commonly accepted results from psychology to explore the novel relation between humans and machines. Key to this endeavour is the trust we, as a field, put in the methodologies and results from psychology, and it is exactly this trust that is now being questioned across psychology and, by extension, should be questioned in HRI. The starting point of this paper is a number of failed attempts by the authors to replicate old and established results on social facilitation, which leads us to discuss our arguable over-reliance on and over-acceptance of methods and results from psychology. We highlight the recent "replication crisis" in psychology, which directly impacts the HRI community, and argue that our field should not shy away from developing its own reference tasks.
In human-robot interaction (HRI), researchers have studied user-preferred robot actions in various social situations. The robot's role is often designed in advance, and situations are assumed in which the robot will interact with the human. However, there are also situations where either the human or the robot may not be willing to interact. In such situations, the human and robot are in a goal conflict and must first agree to begin an interaction. In this paper, we re-examine interaction beginnings and endings as conflict and agreement between human and robot goals - the willingness of each to interact or not. Through our discussion, we categorize conflict/agreement interactions into nine situations. Using a probabilistic analysis approach and 93 HRI recordings, we evaluate the different human behaviors in the different interaction situations. We further question whether learning from typical existing HRI would benefit other scenarios when a robot has physical task capabilities. We conclude that the benefits of understanding different agreement situations largely depend on a robot's task capability as well as the human's expectations toward these capabilities; however, conflict and agreement should not be neglected when applying interaction capabilities to physical-task-capable robots. Our research also suggests probabilistic drawbacks of robot speech in situations where both the human and robot are unwilling to interact.
Robot appearance morphology can be divided into anthropomorphic, zoomorphic, and functional. In recent work, a new category was introduced, called "theomorphic robots", in which robots carry the shape and identity of a supernatural creature or object within a religion. This approach can bring advantages for certain categories of users, such as children and elders. This paper is an exploratory discussion of practical design strategies for representing the divine in robots, based on theoretical insights into the historical intertwinements between sacred art and robotics. The illustrated concepts will be followed in the realisation of the prototypes of the first theomorphic robots.
Therabot is a robotic therapy support system designed to supplement a therapist and to provide support to patients diagnosed with conditions associated with trauma and adverse events. The system takes on the form factor of a floppy-eared dog that fits in a person's lap and is designed to provide patients with support and encouragement for home therapy exercises and in counseling.
Soft robotics technology has been proposed for a number of applications that involve human-robot interaction. In this tabletop demonstration it is possible to interact with two soft robotic platforms that have been used in human-robot interaction experiments (also accepted to HRI'18 as a Late-Breaking Report and a video).
Research has shown that rehabilitation through a repetitive automated orthosis can be beneficial for children with cerebral palsy; however, it is expensive, requires a lot of room, and is rarely available in rehabilitation clinics. A different design approach was utilized to build an exoskeleton that can provide robot-assisted therapy while being mobile, weight-bearing, and safe. The Trexo device is fully powered, provides responsive support, and allows data monitoring. Using this device, the goal is to provide affordable and accessible neurorehabilitation for children with disabilities.
Buddy, at 60 centimeters tall and with 4 degrees of freedom, is a highly engaging social robot. Buddy deploys many assets for social interaction, such as its appealing design, its anthropomorphic face able to display emotional reactions, and its ability to proactively look for a user and propose activities. Buddy is developed by Blue Frog Robotics, a French start-up based in Paris, and aims to be the friendly companion for the whole family. With its intuitive SDK based on the well-known Unity game engine, Buddy is also designed to be a tool for research and education.
Future companion and assistive robots will interact directly with end-users in their own homes over extended periods of time. To be useful, and to remain engaging over the long term, these technologies need to pass a new threshold in social robotics: to be aware of people, their identities, emotions, and intentions, and to adapt their behavior to different individuals. Our immediate goal is to match the social cognition ability of companion animals, which recognize people and their intentions without linguistic communication. The MiRo robot is a pet-sized mobile platform, with a brain-based control system and an emotionally engaging appearance, which is being developed for research on companion robotics, and for applications in education, assistive living, and robot-assisted therapy. This paper describes new MiRo capabilities for animal-like perception and social cognition that support the adaptation of behavior towards people and other robots.
This demonstration will present a software concept and architecture for controlling robot swarms in user studies. Exploring user perception of a swarm's motion and the dynamics of its members is a topic of growing interest. However, at the time of writing, no structured methodology that ensures repeatable and scalable studies is available. We postulate that a swarm's motion can be controlled through three complementary aspects: its behaviour, its agents' state, and the task at hand. We developed a software solution that allows a researcher to quickly implement a new model for each of these aspects and test it with users in a repeatable manner.
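A minimal sketch of how the three complementary aspects might be composed in software; all class and method names here are hypothetical, not taken from the demonstrated system:

```python
import random

class Task:
    """The task at hand, e.g. a rendezvous point the swarm should reach."""
    def __init__(self, goal=(0.0, 0.0)):
        self.goal = goal

class Agent:
    """Per-agent state; the fields that modulate motion are hypothetical."""
    def __init__(self):
        self.pos = [random.uniform(-1, 1), random.uniform(-1, 1)]
        self.energy = 1.0

class Behaviour:
    """Swarm-level motion rule; subclass to swap in a new model."""
    def velocity(self, agent, agents, task):
        raise NotImplementedError

class GoalSeeking(Behaviour):
    def velocity(self, agent, agents, task):
        # Move toward the task goal, scaled by the agent's remaining energy.
        return [(g - p) * agent.energy for g, p in zip(task.goal, agent.pos)]

class SwarmController:
    """Composes the three aspects so each can be varied independently."""
    def __init__(self, behaviour, task, agents):
        self.behaviour, self.task, self.agents = behaviour, task, agents

    def step(self, dt=0.1):
        for a in self.agents:
            v = self.behaviour.velocity(a, self.agents, self.task)
            a.pos = [p + vi * dt for p, vi in zip(a.pos, v)]

swarm = SwarmController(GoalSeeking(), Task(), [Agent() for _ in range(10)])
for _ in range(100):
    swarm.step()
```

Because each aspect is a separate, swappable object, the same study script can be re-run with a different behaviour model or task while everything else stays fixed, which is what repeatable studies require.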
We describe the design and implementation of Blossom, a social robot that diverges from convention by being flexible both internally and externally. Internally, Blossom's actuation mechanism is a compliant tensile structure supporting a free-floating platform. This design enables organic and lifelike movements with a small number of motors and simple software control. Blossom's exterior is also flexible in that it is made of fabric and not rigidly attached to the mechanism, enabling it to freely shift and deform. The handcrafted nature of the shell allows Blossom to take on a variety of forms and appearances. Blossom is envisioned as an open platform for user-crafted human-robot interaction.
Current assistive devices to help disabled people interact with the environment are complicated and cumbersome. Our approach aims to solve these problems by developing a compact and non-obtrusive wearable device to measure signals associated with human physiological gestures, and thereby generate useful commands for interacting with the smart environment. Our innovation uses machine learning and non-invasive biosensors on top of the ears to identify eye movements and facial expressions. With these identified signals, users can control different applications, such as a cell phone, powered wheelchair, smart home, or other IoT (Internet of Things) devices with simple and easy operations. Combined with a VR headset, the user can use our technology to control a camera-mounted telepresence robot and navigate the environment in first-person view (FPV) using eye movements and facial expressions. This enables a highly intuitive, completely hands-free and touch-free mode of interaction.
A key element of system transparency is allowing humans to calibrate their trust in a system, given its inherent uncertainty, emergent behaviors, etc. As robotic swarms progress towards real-world missions, such transparency becomes increasingly necessary in order to reduce the disuse, misuse, and errors humans make when influencing and directing the swarm. However, achieving this objective requires addressing the complex challenges associated with providing transparency. Two swarm transparency challenge categories, with exemplar challenges, are provided.
In this paper we present the results of a preliminary study on the impact of educational robotics on students' self-concept beliefs. Some differences have emerged in students with low self-esteem and little faith in the possibility of improving their intelligence.
Previous studies in HRI have found that the timing of reliability drops during a human-robot interaction affects real-time trust and control allocation strategy. Studies have also examined the impact of providing continuous feedback about robot confidence on trust, operator workload, and the optimisation of control allocation strategy during a human-robot collaborative task. In this paper, we discuss how we wish to further explore different methodologies for giving feedback and study their impact on trust, control allocation, and workload. We will conduct new studies using a study design similar to that used by Desai et al. [1], incorporating a few changes in the method of providing feedback. The goal is to compare the results of the new study with the old one and to analyse the effect of different feedback methodologies on real-time trust, workload, and control allocation strategy.
Fully autonomous vehicles provide an opportunity to improve current transportation solutions, both for drivers and for people unable to drive. In this paper we present the preliminary results of a study aiming to understand user needs and expectations for autonomous vehicle interfaces. We found that users expect a different type of information to be fed back to them depending on whether the vehicles are privately owned or shared. The results of this study will be confirmed by further work and contribute to the development of a baseline fully autonomous vehicle user interface.
We tested whether a unidimensional measure of an individual's predisposition to anthropomorphize (IDAQ score) was predictive of anthropomorphism ratings along a two-dimensional rating scale (based on Haslam's dehumanization model). Results indicate that, when anthropomorphizing robots, IDAQ scores may be more predictive of the Uniquely Human dimension of anthropomorphism than of the Human Nature dimension.
The Sensing, Computing, Interactive Platform for Research Robotics (SCIPRR) is a humanoid, head-shaped sensor package designed to balance both sensing and interaction needs within HRI. Designed to maximize reconfigurability and modularity, SCIPRR can accommodate most sensors and computers with little effort. We report the process of adapting SCIPRR for audio and point-cloud sensors through the use of 3D modeling and printing.
Human-Robot Collaboration is an area of particular current interest, with attempts to make robots more generally useful in contexts where they work side-by-side with humans. Currently, efforts typically focus on the sensory and motor aspects of the task on the part of the robot, to enable it to function safely and effectively on an assigned task. In the present contribution, we instead focus on the cognitive faculties of the human worker by attempting to incorporate properties of human cognition known from psychology. In a proof-of-concept study, we demonstrate how applying characteristics of human categorical perception to the type of robot assistance affects participants' task performance and experience. This lays the foundation for further developments in cognitive assistance and collaboration in side-by-side working for humans and robots.
Robots in agricultural contexts are finding a growing number of applications in (partial) automation for increased productivity. However, this presents complex technical problems to be overcome, which are magnified when these robots are intended to work side-by-side with human workers. In this contribution we present an exploratory pilot study to characterise interactions between a robot performing an in-field transportation task and human fruit pickers. Partly an effort to inform the development of a fully autonomous system, the emphasis is on involving the key stakeholders (i.e. the pickers themselves) in the process so as to maximise the potential impact of such an application.
Telepresence facilitates social presence among geographically separated individuals and is commonplace in modern organizations. However, few studies explore individuals' perceptions of leadership and communication quality when such telepresence technologies are used. Drawing on Human-to-Human Interaction Script Theory, a two-group posttest experiment was conducted comparing in-person and telepresence group leaders. Results indicated higher ratings of social attractiveness and leadership quality for the telepresence leader than for the in-person leader.
This paper presents two studies that investigate how gendered robot voices influence human trust. The first study employs a questionnaire in which participants evaluate their willingness to share personal information with a gendered robot shown in video recordings. The second study is a live experiment in which participants interact with the NAO robot through real dialogue. The studies show a small preference for a male-voiced robot with respect to trust and sharing personal information, though further studies are still necessary.
Human-robot collaboration is becoming more common in factories. In this paper, we present our designs for methods of informing a person about the robot's intent before the robot moves into the shared work space. We discuss our plan for a human-subjects study to determine which methods express the robot's intent in the most easily understandable way. Our study is also designed to determine the optimal distance from the work space at which to show signals of intent on the robot itself. The goal of our work is to improve efficiency and safety in human-robot collaboration.
Remote robotic manipulation is a challenging task since integration of cognition, perception, and action within and between the human and robot agents is required. Adherence to Weber's law in reach-to-grasp motion can be used for objectively examining potential interactions between visual perception and visuomotor control. Previous work in using Weber's law for examining telerobotic control tested a system with delays, a 2D virtual interface, and a high-end system with negligible delays. We present two new remote environments: a telerobotic setup with negligible delays, and a 3D virtual environment that will be additionally used to examine adherence to Weber's law.
Today's teens will most likely be the first generation to spend a lifetime living and interacting with both mechanical and social robots. Although human-robot interaction has been explored in children, adults, and seniors, examination of teen-robot interaction has been minimal. Using human-centered design, our team is developing a social robot to gather stress and mood data from teens in a public high school. As part of our preliminary design stage, we conducted an interaction pilot study in the wild to explore and capture teens' initial interactions with a low-fidelity social robot prototype. We observed strong engagement and expressions of empathy from teens during our qualitative interaction studies.
This study investigates whether stress-reducing effects known from human-human interaction also occur when humans interact with robots. To analyze individual stress levels, 63 participants were asked about their stress level before and after interacting with a humanoid robot. Three conditions of touch behavior were analyzed in the course of this study: active touch, passive touch, and no touch. However, the results of this study indicate that there seems to be no difference among the three touch conditions in the individual perception of stress.
Medical staff use Patient Reported Outcome Measurement (PROM) questionnaires as a means of collecting information on the effectiveness of care delivered to patients as perceived by the patients themselves. Especially for the older patient group, PROM questioning poses an undesirable workload on the staff. This proof-of-concept paper investigates whether a social robot with a display can conduct such questioning in an acceptable and reliable way. A set of 15 typical questions was selected from existing PROM questionnaires. For asking the questions, processing the answers, and responding, a multi-modal robot dialogue was designed and implemented. In a within-subjects experiment, 31 community-dwelling older participants answered the 15 questions in two conditions: questioning by the robot versus questioning by a human. Most of the robot questioning provided reliable answers, though it took somewhat more time than human questioning. The experiment demonstrated the feasibility of a social robot for the acceptable and reliable collection of PROM data from older persons.
An experiment was conducted in which a robotic platform performs artificially generated gestures that both trained classifiers and human participants recognize. Classification accuracy is evaluated through a new metric of coherence in gesture recognition between humans and robots. Experimental results showed an average recognition performance of 89.2% for the trained classifiers and 92.5% for the participants. Coherence in one-shot gesture recognition was determined to be gamma = 93.8%. This new metric provides a quantifier for validating how realistic robot-generated gestures are.
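The abstract does not spell out how the coherence metric is computed; one plausible reading is the rate of agreement between human and classifier labels over the same gesture set. A purely illustrative sketch under that assumption:

```python
import numpy as np

def coherence(human_labels, classifier_labels):
    """Fraction of gestures for which human and classifier assign the
    same label -- one plausible reading of the coherence metric; the
    abstract does not give its exact definition."""
    human = np.asarray(human_labels)
    clf = np.asarray(classifier_labels)
    return float(np.mean(human == clf))

# Toy usage with made-up labels for five gestures:
print(coherence(["wave", "point", "stop", "wave", "ok"],
                ["wave", "point", "stop", "nod", "ok"]))  # 0.8
```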
In this paper we present the implementation of a robot that dynamically hesitates based on the attention of the human interaction partner. To this end, we outline requirements for a real-time interaction scenario, describe the realization of a disfluency insertion strategy, and present observations from the first tests of the system.
Socially Assistive Robots (SAR) have been gaining significant attention in multiple health care applications. However, SAR has not been fully explored in cardiac rehabilitation (CR). One of the most critical issues in CR is patients' lack of adherence to the rehabilitation process. Hence, based on evidence that the presence of an embodied agent increases compliance, we present in this paper the integration of a social robot into a CR programme. As a preliminary study, the setup is evaluated over the first four sessions with four patients divided into two conditions (robot and no robot). The results show that this system might have a promising impact on CR and holds promise to be extended to a larger group of patients.
Social robots are becoming more prevalent in our daily environments. In these contexts, many researchers have studied how robots communicate with people. However, not enough attention has been paid to social interaction between robots and people with disabilities. In particular, little attention has been devoted to deaf and blind children interacting with robots. In this work we propose a physical haptic interaction mode that allows communication between anthropomorphic robots and people with visual-auditory impairments. We have developed a scenario in which this new interaction mode will be evaluated.
Here we present a projection augmented reality (AR) based assistive robot, which we call the Pervasive Assistive Robot System (PARS). PARS aims to improve the quality of life of the elderly and less able-bodied. In particular, the proposed system will support dynamic display and monitoring functions, which will be helpful for older adults who have difficulty moving their limbs or who have impaired memory. We attempted to verify the usefulness of PARS using various scenarios. We expect that PARSs will be used as assistive robots for people who experience physical discomfort in their daily lives.
Learning companion robots can provide personalized learning interactions to engage students in many domains including STEM. For successful interactions, students must feel comfortable and engaged. We describe an experiment with a learning companion robot acting as a teachable robot; based on human-to-human peer tutoring, students teach the robot how to solve math problems. We compare student attitudes of comfort, attention, engagement, motivation, and physical proximity for two dyadic stance formations: a face-to-face stance and a side-by-side stance. In human-robot interaction experiments, it is common for dyads to assume a face-to-face stance, while in human-to-human peer tutoring, it is common for dyads to sit in side-by-side as well as face-to-face formations. We find that students in the face-to-face stance report stronger feelings of comfort and attention, compared to students in the side-by-side stance. We find no difference between stances for feelings of engagement, motivation, and physical proximity.
Research in socially assistive robotics (SAR) has shown potential to supplement expensive and sometimes inaccessible therapy for children affected with autism spectrum disorder (ASD). However, due to practical constraints, most SAR research has been limited to short-term studies in controlled environments. In this report, we present a 30-day, in-home case study of a fully autonomous SAR intervention designed for children with ASD and discuss its insights into the value of personalized, long-term, and situated interaction.
As healthcare shifts towards a patient-centered model, robotic technology can play an important role in monitoring, informing, supporting, and connecting independently living individuals with various physical and mental health conditions. As part of a study evaluating the use of the Socially Assistive Robot (SAR) Paro in the homes of older adults with depression, we performed two focus groups with clinicians to discuss how they might use sensor data collected by domestic SARs in clinical practice. In the first focus group, participants discussed potential uses of currently available SARs and sensors by them and their clients. The second focus group took place after data from sensors onboard the Paro robot had been collected in older adults' homes. Clinicians considered the data and what information might be most useful for supporting clinical care. Data regarding client health, such as behavioral changes in sleep and daily activity levels, were of particular interest to clinicians. They also suggested that using SARs to provide clients with information and interaction could help clients develop coping skills and alleviate symptoms.
Autism Spectrum Disorder (ASD) is a complex developmental disorder that requires personalising treatment to the individual's condition, in particular for individuals with Intellectual Disability (ID), who constitute the majority of those with ASD.
In this paper, we present a preliminary analysis of our ongoing research on personalised care for children with ASD and ID. The investigation focuses on integrating a social robot within the standard treatment, in which the tasks and the level of interaction are adapted to the individual's ID level and follow their progress after rehabilitation.
Teleoperating a mobile robot over rough terrain is difficult with current interaction implementations. These implementations compromise the human operator's situation awareness of the mobile robot's attitude, which is crucial for maintaining safe teleoperation. We therefore developed a novel haptic device to relay a mobile robot's attitude (roll and pitch) to the human operator. A user experiment was performed to evaluate the efficacy of this device in two configurations: a natural attitude configuration between the robot and haptic device, and an ergonomic attitude configuration, which shifts the representation of pitch to the yaw axis. Our results indicate participants were able to successfully perceive the attitude state in both configurations, 58.79% of the time in the natural configuration and 63.18% in the ergonomic configuration, both significantly above the 1/3 chance level. Interestingly, the perception of the attitude state was significantly higher for the roll axis than for the pitch axis for the critical and unstable states.
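As a worked illustration of the reported chance-level comparison, a one-sided binomial test against the 1/3 guessing rate can be used. The per-condition trial counts are not given in the abstract, so the n below is invented:

```python
from scipy.stats import binomtest

# The abstract reports 58.79% (natural) and 63.18% (ergonomic) correct
# attitude judgments against a 1/3 chance level. Per-condition trial
# counts are not given, so n = 200 below is purely illustrative.
for label, rate in [("natural", 0.5879), ("ergonomic", 0.6318)]:
    n = 200                       # hypothetical number of judgments
    k = round(rate * n)           # implied number of correct judgments
    result = binomtest(k, n, p=1/3, alternative="greater")
    print(f"{label}: {k}/{n} correct, p = {result.pvalue:.2e}")
```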
Before determining whether an interaction with a social robot benefits a person with dementia (PwD), it is necessary to understand the process of the PwD's acceptance of the robot. Hence, we propose strategies to facilitate acceptance by PwD in real environments. We implemented these strategies in a conversational robot called Eva and conducted a five-week study with six PwD to analyze the impact of the strategies on acceptance. Our preliminary results suggest that adding a pleasurable component, such as music, to the interaction improves it and could foster better bonding. Moreover, a facilitator promotes the interaction between PwD and robot, but the facilitator's participation gradually decreases, giving way to direct PwD-robot interaction.
The emergence of robots in everyday life raises the question of how people explain the behavior of robots---in particular, whether they explain robot behavior the same way as they explain human behavior. However, before we can examine whether people's explanations differ for human and robot agents, we need to establish whether people judge basic properties of behavior similarly regardless of whether the behavior is performed by a human or a robot. We asked 239 participants to rate 78 behaviors on the properties of intentionality, surprisingness, and desirability. While establishing a pool of robust stimulus behaviors (whose properties are judged similarly for human and robot), we detected several behaviors that elicited markedly discrepant judgments for humans and robots. Such discrepancies may result from norms and stereotypes people apply to humans but not robots, and they may present challenges for human-robot interactions.
We report on a field exercise in which a team of human fire-fighters used robots to enact a realistic disaster response mission in an industrial environment. In this exercise we evaluated the technical working of an integrated robotic system and gained insights concerning the manner in which robots and information streams can be utilized effectively. We have learnt important lessons regarding the employment of human-robot teams in complex, realistic missions.
This paper presents a study investigating perceptions of an agent (human, robot, or A.I. computer program) delivering a treatment plan. Results demonstrate that human, robot, and A.I. physicians were perceived to be credible and attractive. However, the human physician received higher ratings when compared to the machine agents. Results are explained by the higher social presence attributed to the human physician.
The One Hundred Year Study on Artificial Intelligence's Report of the 2015 Study Panel [1] predicts that artificial intelligence (AI) will 'enhance education at all levels, especially by providing personalization at scale' (p. 31). Interactive machines are already tutoring students in classrooms. It is predicted that the use of these technologies in our lives, including in classrooms, will drastically increase over the next fifteen years. This paper presents a pilot study examining the Co-Bot experience of students participating in a robotics competition. The study focuses on child-robot interaction (cHRI) in education by looking at students identified as active users of social robots. Data were collected during the World Robot Summit Junior Category, School Robot Challenge Workshop and Trial 2017, held in Tokyo, Japan in August 2017.
In this paper, we discuss the role of the movement trajectory and velocity enabled by our tele-robotic system (ReMa) for remote collaboration on physical tasks. Our system reproduces changes in object orientation and position at a remote location using a humanoid robotic arm. However, even minor kinematic differences between the robot and the human arm can result in awkward or exaggerated robot movements. As a result, user communication with the robotic system can become less efficient, less fluent, and more time intensive.
There is a growing body of knowledge on how people interact with robots, but limited information on the differences between young and old adults in their preferences when interacting with humanoid robots. Our goals in the current study were: (1) to investigate differences between age groups in how they relate to a humanoid robot, and (2) to test whether they prefer an interaction with the robot over an interaction with a computer screen. Thirty old adults and 30 young adults took part in two experiments in which they were asked to complete a cognitive-motor task. Both old and young adults reported that they enjoyed the interaction with the robot, finding it engaging and fun, and preferred the embodied robot over the non-embodied computer screen. We found that a slow response time of the robot had a negative influence on users' perception of the robot and their motivation to continue interacting with it.
Telepresence robots hold the potential to allow absent students to remain physically embodied and socially connected in the classroom. In this work, we investigate the effects of telepresence robot personalization on K-12 students' perceptions of the robot, perceptions of themselves, and feelings of self-presence. We conducted a between-subjects, 2-condition user study (N=24) on robot personalization. In this study, 9- to 13-year-old participants remotely completed an educational exercise using a telepresence robot. Lessons learned from this study will inform our continued work on using remote presence robots to preserve the educational and social experiences of students during extended absences from school.
Telepresence robots have the potential for improving human-to-human communication when a person cannot be physically present at a given location. One way to achieve this is to construct a system that consists of a robot and video conferencing setup. However, a conventional implementation would involve building a separate server or control path for teleoperation of the robot in addition to the video conferencing system. In this paper, we propose an approach to robot teleoperation via a video call that does not require the use of an additional server or control path. Instead, we propose directly teleoperating the robot via the audio and video signals of the video call itself. We experiment on which signals are most suitable for this task and present our findings.
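The abstract leaves open which audio/video signals carry the commands. One conceivable audio-only scheme, sketched here purely as an assumption, encodes each drive command as a distinct tone injected into the call and recovers it on the robot side from the FFT peak:

```python
import numpy as np

FS = 16000  # sample rate (Hz)
# Hypothetical tone frequencies for drive commands; the paper explores
# which signals work best but does not fix any such mapping.
COMMAND_TONES = {"forward": 700.0, "back": 900.0,
                 "left": 1100.0, "right": 1300.0}

def encode(command: str, duration: float = 0.2) -> np.ndarray:
    """Render a command as a short sine tone to inject into the call audio."""
    t = np.arange(int(FS * duration)) / FS
    return np.sin(2 * np.pi * COMMAND_TONES[command] * t)

def decode(audio: np.ndarray) -> str:
    """Recover the command from received call audio via the FFT peak."""
    spectrum = np.abs(np.fft.rfft(audio))
    peak_hz = np.fft.rfftfreq(len(audio), 1 / FS)[np.argmax(spectrum)]
    return min(COMMAND_TONES, key=lambda c: abs(COMMAND_TONES[c] - peak_hz))

assert decode(encode("left")) == "left"
```

A video-channel analogue would encode commands in visual patterns shown to the remote camera; the trade-offs between such signal choices are what the paper's experiments probe.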
Robots' usage in the fields of human support and healthcare is spreading widely. Robotic devices to assist humans in the self-feeding task have been developed to help patients with limited mobility in the upper limbs, but the acceptance of these robots has been limited. In this work, we investigate how to quantitatively evaluate the comfort of an eating assistive device by estimating the interaction forces between the human and the robot while eating. We experimentally verify our concept with a commercially available eating assistive device and a human subject. The evaluation results demonstrate the feasibility of our approach.
Understanding explanations of machine perception is an important step towards developing accountable, trustworthy machines. Furthermore, speech and vision are the primary modalities by which humans collect information about the world, but the linking of visual and natural language domains is a relatively new pursuit in computer vision, and it is difficult to test performance in a safe environment. To couple human visual understanding and machine perception, we present an explanatory system for creating a library of possible context-specific actions associated with 3D objects in immersive virtual worlds. We also contribute a novel scene description dataset, generated natively in virtual reality containing speech, image, gaze, and acceleration data. We discuss the development of a hybrid machine learning algorithm linking vision data with environmental affordances in natural language. Our findings demonstrate that it is possible to develop a model which can generate interpretable verbal descriptions of possible actions associated with recognized 3D objects within immersive VR environments.
With robot technologies becoming more readily available in classroom settings, we integrated a three-week project using commercially available robots as a prototyping kickoff for a year-long Systems Engineering (SE) capstone course. As part of this undergraduate class, students assembled and programmed robots in four teams to compete against each other in the 'Robot Deathmatch Competition'. They were given real-world requirements, customers, materials, and deadlines to prepare them for larger-scale projects. The project integrated lessons from previously taught courses into a condensed technology-inclusive experience and set students on a successful path for their upcoming year-long projects.
Trust is a vital determinant of acceptance of automated vehicles (AVs), and expectations and explanations are often at the heart of any trusting relationship. Once expectations have been violated, explanations are needed to mitigate the damage. This study introduces the importance of the timing of explanations in promoting trust in AVs. We present the preliminary results of a within-subjects experimental study involving eight participants exposed to four AV driving conditions (i.e., 32 data points). Preliminary results show a pattern suggesting that explanations provided before the AV takes action promote more trust than explanations provided afterward.
This study focuses on the effect of robotic advisers on a jury's ability to decide the length of criminal sentences in a court scenario. Building on a preliminary study (Hayashi & Wakabayashi, 2017), we further investigated humans' negative feelings toward robots, such as 'robophobia'; it has been suggested that individuals with a higher degree of robophobia may not comply with a robot's suggestions. We conducted a laboratory experiment using a simple jury decision-making task, in which a robot suggested the length of punishment for a particular crime. We examined whether participants would make a decision based on the robot's suggestion and analysed the correlations between the length of sentence chosen by participants and their degree of robophobia. Results do not show statistical significance; however, they suggest that participants may comply with robots when they feel less negatively towards them.
High-stress environments, such as a NASA control room, require optimal task performance, as a single mistake may cause monetary loss or the loss of human life. Robots can partner with humans in a collaborative or supervisory paradigm. Such teaming paradigms require the robot to interact appropriately with the human without decreasing either's task performance. Workload is directly correlated with task performance; thus, a robot may use a human's workload state to modify its interactions with the human. A diagnostic workload assessment algorithm that accurately estimates workload, using results from two evaluations, one peer-based and one supervisory-based, is presented.
The integration of social robots within service industries requires social robots to be persuasive. We conducted a vignette experiment to investigate the persuasiveness of a human, robot, and an information kiosk when offering consumers a restaurant recommendation. We found that embodiment type significantly affects the persuasiveness of the agent, but only when using a specific recommendation sentence. These preliminary results suggest that human-like features of an agent may serve to boost persuasion in recommendation systems. However, the extent of the effect is determined by the nature of the given recommendation.
Having emotions is essential for robots to understand and sympathize with the feelings of people; in addition, it may allow robots to be accepted into human society. The role of emotions in decision-making is another important perspective. In this paper, a model of emotions based on various neurological and psychological findings related to empathic communication between humans and robots is proposed. Subsequently, a decision-making mechanism based on affects, using a convolutional LSTM and a deep Q-network, is examined.
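A skeleton of what an affect-conditioned deep Q-network with a recurrent visual encoder could look like; this is a stand-in for the proposed convolutional-LSTM/DQN pairing, since the exact layers and affect representation are not given in the abstract (an LSTM over CNN features is used here in place of a true convolutional LSTM):

```python
import torch
import torch.nn as nn

class AffectDQN(nn.Module):
    """Q-values over actions, conditioned on an affect vector
    (e.g. valence/arousal -- an assumed representation)."""
    def __init__(self, n_actions=4, affect_dim=2):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                                 nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                                 nn.Flatten())
        self.rnn = nn.LSTM(input_size=32 * 13 * 13, hidden_size=128,
                           batch_first=True)
        self.q = nn.Linear(128 + affect_dim, n_actions)

    def forward(self, frames, affect):
        # frames: (batch, time, 3, 64, 64); affect: (batch, affect_dim)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.reshape(b * t, 3, 64, 64)).reshape(b, t, -1)
        h, _ = self.rnn(feats)                      # temporal encoding
        return self.q(torch.cat([h[:, -1], affect], dim=-1))

q = AffectDQN()(torch.randn(2, 4, 3, 64, 64), torch.randn(2, 2))
print(q.shape)  # torch.Size([2, 4]) -- Q-values per action
```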
One of the advantages of teaching robots by demonstration is that it can be more intuitive for users to demonstrate rather than describe the desired robot behavior. However, when the human demonstrates the task through an interface, the training data may inadvertently acquire artifacts unique to the interface, not the desired execution of the task. Being able to use one's own body usually leads to more natural demonstrations, but those examples can be more difficult to translate to robot control policies. This paper quantifies the benefits of using a virtual reality system that allows human demonstrators to use their own body to perform complex manipulation tasks. We show that our system generates superior demonstrations for a deep neural network without introducing a correspondence problem. The effectiveness of this approach is validated by comparing the learned policy to a policy learned from data collected via a Sony PlayStation 3 (PS3) DualShock 3 wireless controller.
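The learning step itself is plain behavioral cloning from (state, action) pairs, whichever interface collected them. A minimal sketch with placeholder dimensions, since the network and state/action spaces are not specified in the abstract:

```python
import torch
import torch.nn as nn

# Placeholder dimensions: a 32-D state and a 7-D arm command (assumed).
policy = nn.Sequential(nn.Linear(32, 128), nn.ReLU(),
                       nn.Linear(128, 128), nn.ReLU(),
                       nn.Linear(128, 7))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def bc_step(states, actions):
    """One supervised step: regress demonstrated actions from states."""
    loss = nn.functional.mse_loss(policy(states), actions)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Toy batch of 64 (state, action) pairs drawn from demonstrations:
print(bc_step(torch.randn(64, 32), torch.randn(64, 7)))
```

The claimed advantage of the VR interface then concerns the quality of the demonstrations fed into such a step, not the step itself.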
This paper presents a novel robot-assisted intervention framework designed to target sensory processing and emotion regulation difficulties in children with autism spectrum disorder. Three types of systems are utilized to provide scalable robotic interactions with gesture-based and character-based socio-emotional expressions. The intervention framework includes (1) an emotional interaction and regulation game in mobile computing settings, (2) interactive robotic sessions with socio-emotional scenarios, and (3) gesture identification games to measure emotion processing and verbal skills in socio-emotional settings. This paper also presents preliminary results obtained from a pilot study conducted to evaluate the three interventions.
Autonomous vehicles (AVs) have the potential to improve road safety. Trust in AVs, especially among pedestrians, is vital to alleviate public skepticism. Yet much of the research has focused on trust between the AV and its driver/passengers. To address this shortcoming, we examined the interactions between AVs and pedestrians using uncertainty reduction theory (URT). We empirically verified this model with a user study in an immersive virtual reality environment (IVE). The study manipulated two factors: AV driving behavior (defensive, normal and aggressive) and the traffic situation (signalized and unsignalized). Results suggest that the impact of aggressive driving on trust in AVs depends on the type of crosswalk. At signalized crosswalks the AV's driving behavior had little impact on trust, but at unsignalized crosswalks the AV's driving behavior was a major determinant of trust. Our findings shed new insights on trust between AVs and pedestrians.
Although a substantial number of human-subject studies have evaluated multimodal interactions in in-vehicle human-automation systems, few modeling studies have been conducted using computational cognitive architectures. This report introduces a computational model to predict task completion time and workload when two human-automation interaction methods (i.e., remote-manual and voice controls) are used. The model was evaluated against 35 human subjects' data for an in-vehicle task of adjusting the comfort level of a particular area of the driver's seat.
This paper discusses robots exhibiting "human" characteristics and the associated implications for anthropomorphism research. Section 1 discusses Haslam's (2006) model of dehumanization, which has been used to conceptualize anthropomorphism as the inverse of dehumanization. Section 2 provides examples of robots that exhibit the human characteristics in Haslam's model. Section 3 describes their theoretical and practical implications for HRI research.
Soft robotics technology has been proposed for a number of applications that involve human-robot interaction. This study investigates how a silicone-based pneumatically actuated soft robotic tentacle is perceived in interaction. Quantitative and qualitative data were gathered from questionnaires (N=47) and video recordings. Results show that the overall appeal of the robot was positively associated with its perceived naturalness. They further indicate a slight user preference for the movements and the tactile qualities of the robot and a slightly negative evaluation of its appearance.
Hearing-impaired communities around the world communicate via sign languages. The focus of this work is to develop an interpreting human-robot interaction system that could act as a sign language interpreter in public places. This paper presents ongoing work aiming to recognize fingerspelling gestures in real time. To this end, we utilize a deep learning method to classify the 33 gestures used for fingerspelling by the local deaf-mute community.
In order to train and test the performance of the recognition system, we utilize a previously collected dataset of RGB-D motion data for the 33 manual gestures. Applying a deep learning method, we achieved an offline average accuracy of 75% for recognition of the complete alphabet. In real time, the result was only 24.72%. In addition, we integrated a form of auto-correction to perform spell-checking on the recognized letters. Among 35 tested words, four were recognized correctly (11.4%). Finally, we conducted an exploratory study inviting ten deaf individuals to interact with our sign language interpreting robotic system.
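The auto-correction step can be as simple as snapping the recognized letter sequence to the closest word in a lexicon; the actual spell-checker used is not described in the abstract, so difflib stands in here as an assumption:

```python
from difflib import get_close_matches

# Illustrative lexicon; a real system would use a full dictionary.
LEXICON = ["hello", "thanks", "school", "robot", "friend"]

def autocorrect(recognized: str) -> str:
    """Snap a noisily recognized letter sequence to the nearest word."""
    matches = get_close_matches(recognized.lower(), LEXICON, n=1, cutoff=0.5)
    return matches[0] if matches else recognized

print(autocorrect("rovot"))  # -> "robot"
```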
We investigated the effects of a robot's physical contact on user acceptance within the functional intimate distance. We conducted a two-condition (robot interaction type: interaction with physical contact vs. interaction with a tool) within-participants experiment (N=18), run as a video-based observation study. According to the experimental results, participants' evaluation of the robot's empathy and sociability was not affected by physical contact in the functional intimate zone. On the other hand, participants felt secure and perceived the robot as knowledgeable when it measured the patient's temperature with a thermometer instead of its hand.
A good understanding of handovers between humans is critical for the development of robots in the service industry. Here we investigated the extent to which humans estimate their partner's behavior during handovers. We show that, even in the absence of visual feedback, humans modulate their handover location for partners they have just met, and according to their distance from the partner, such that the resulting handover errors are consistently small. Our results suggest that humans can predict each other's preferred handover location.
We evaluate abrupt turn signalling using Mixed Reality avatars with two methods: a method where the avatar signals using its body, and a method where a path is rendered on the floor. Results indicate that study participants prefer the body method, but that the path method is more accurate when the path is longer.
Autonomous robots deployed around humans must be able to ask for help when problems arise. However, people may have incorrect mental models of the robots' capabilities or task, making them unable to help. We propose a data-driven method to estimate humans' beliefs after hearing task-related utterances and build sets of utterances that influence people towards useful help in expectation. We present an example showing that our method selects effective utterances when the desired help is very different from what a person expects.
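A toy version of the selection criterion: score each candidate utterance by the expected probability of receiving useful help under the estimated belief model. All utterances, beliefs, and probabilities below are invented for illustration:

```python
BELIEFS = ["robot_is_stuck", "robot_needs_object", "robot_is_done"]

# P(listener holds belief b after hearing utterance u), as a data-driven
# method like the paper's might estimate (numbers invented):
P_BELIEF = {
    "Please help me": {"robot_is_stuck": 0.5,
                       "robot_needs_object": 0.2, "robot_is_done": 0.3},
    "I cannot reach the cup": {"robot_is_stuck": 0.2,
                               "robot_needs_object": 0.7, "robot_is_done": 0.1},
}
# P(person gives the help the robot actually needs | belief b):
P_USEFUL = {"robot_is_stuck": 0.1, "robot_needs_object": 0.9,
            "robot_is_done": 0.0}

def best_utterance(utterances):
    """Pick the utterance maximizing expected useful help."""
    return max(utterances,
               key=lambda u: sum(P_BELIEF[u][b] * P_USEFUL[b] for b in BELIEFS))

print(best_utterance(list(P_BELIEF)))  # -> "I cannot reach the cup"
```

The specific utterance wins because it steers the listener's belief toward the state in which their help is actually useful.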
The use of chatbots is more common in our everyday lives than ever before. However, few studies have compared the text- and voice-input modalities of chatbots in the banking industry. In this study, through empirical, survey-based research, users were shown to rate their relationships with a banking chatbot as more helpful and self-validating when they communicated with it via a voice-input modality than via a text-input modality.
Active haptic perception is one of the primary functions of a real robot. Most previous studies on tactile perception have used simplified tactile information interpreted through visual perception. However, data from visual interpretation require a large number of trials, and there is a risk of critical errors in real environments. In this paper, we developed a glove that maps the three-dimensional haptic detections of a robot's tactile sensors to intuitive vibration patterns. We also proposed a haptic mapping algorithm that combines three types of vibration modes. The contact information from the robot's tactile sensors was encoded into vibration patterns representing the position and strength of the contact. Using vibration transferred through the proposed device, a human operator could successfully grasp an object in an unknown position. With the proposed system, a robot can learn a more effective manipulation strategy by mimicking human reactions to the haptic sensation.
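One simple way to realize such a mapping: let the direction of the contact pick a vibration motor and the force magnitude set its intensity. The actual encoding and the three vibration modes are not specified in the abstract, so this sketch is an assumption for illustration:

```python
import numpy as np

N_MOTORS = 8        # hypothetical ring of vibration motors on the glove
MAX_FORCE = 10.0    # saturation force in newtons (assumed)

def contact_to_vibration(contact_xyz, force_n):
    """Return per-motor amplitudes in [0, 1]: the bearing of the contact
    selects the motor, and force magnitude sets the intensity."""
    angle = np.arctan2(contact_xyz[1], contact_xyz[0])   # contact bearing
    motor = int(round(angle / (2 * np.pi) * N_MOTORS)) % N_MOTORS
    amplitude = min(force_n / MAX_FORCE, 1.0)
    pattern = np.zeros(N_MOTORS)
    pattern[motor] = amplitude
    return pattern

print(contact_to_vibration((0.0, 1.0, 0.2), 4.0))  # strongest on motor 2
```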
This paper describes the behavior design of a social robot for child-robot interaction in a public place, towards a social robot that can interact with children in a friendly manner while advertising products and services. First, we developed a robot system with which the humanoid robot Pepper can interact with people by greeting, waving a hand, shaking hands, and asking people to stroke its head. Then, we observed the actual interaction of the robot with a group of people, including children, in a science museum. Finally, we designed robot behaviors for children in a public place.
One of the tasks of science communication is discussion between citizens and scientists about common issues related to science and technology. This paper reports people's opinions about interactive robots in daily life, which we gathered through science communication activities. We asked people what kinds of roles they prefer for a robot and whether it is necessary for robots to make friends with people in our lives.
Sports training with conventional feedback devices, such as mirrors and video, has some problems. For example, it is difficult to accurately grasp all the joint angles for a target skill, and to attend to and maintain interest in each joint angle. To address these problems, we have proposed a new training method using a coached humanoid robot that reproduces the participant's motion based on position data measured by a motion capture system. The proposed method shows a superior training effect, since the robot is used not only as a feedback device but also as an avatar for self-coaching. As a basic analysis of the proposed method, this paper investigates posture memory retention, one of the important factors influencing the success of the proposed method, instead of motion for a target skill. The experimental results show that training with the coached humanoid robot yields superior posture memory retention compared to the other two feedback conditions, no feedback and mirror.
We propose "Shelly", a robot for restraining children's abusive behaviors toward robots. The robot's overall system and its unique multi-modal interface are discussed first. Then, the result of an early-stage evaluation of the robot's designed behaviors is explained. Finally, a primary function - the robot suspending all interactions for a period by hiding its head and limbs inside its shell - is proposed to restrain children's robot abuse. The results show that this function, with an appropriate operating time, effectively reduces children's robot abuse while maintaining their engagement with the robot.
Robots used for physical human-robot interaction (pHRI) are currently advancing from being simple stand-alone manipulators passing tools or parts to human collaborators to becoming autonomous co-workers that continuously share operational control with their human partners. One of the major challenges in this transition is to extend robot capabilities in sensing human motions and behaviour, thereby allowing for more seamless cooperation and ensuring the safety of human partners. Currently, there is a gap between the desire for humans and robots to work closely together and share control of operations, and how robustly we can measure and predict human motions and intentions in pHRI operations. In this paper, we propose to use a set of wireless inertial motion sensors fixed to the body of the human partner to track and estimate human motions, and to use the interaction contact between the robot and the human, as detected by a force/torque (FT) sensor, as an interaction velocity update (IVU) to estimate and reduce drift in the position/orientation estimates. Our hypothesis is that human motion estimates from inertial sensors with an IVU will give sufficiently accurate and robust motion information for safe cooperative pHRI operations.
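The IVU idea can be sketched as follows: velocity integrated from inertial data drifts, and a detected contact supplies a velocity observation (the known velocity of the robot's contact point) that pulls the estimate back. The threshold and gain below are assumptions, not values from the paper:

```python
import numpy as np

CONTACT_FORCE_THRESHOLD = 2.0  # N; hypothetical contact detector on the FT sensor
K = 0.8                        # correction gain (assumed)

def step(vel, accel, dt, ft_force, robot_contact_vel):
    """Propagate velocity from the IMU; blend toward the velocity implied
    by the robot contact point whenever contact is detected."""
    vel = vel + accel * dt                            # inertial propagation (drifts)
    if np.linalg.norm(ft_force) > CONTACT_FORCE_THRESHOLD:
        vel = vel + K * (robot_contact_vel - vel)     # interaction velocity update
    return vel

# Toy usage: one update step while the FT sensor reports contact.
v = np.zeros(3)
v = step(v, np.array([0.1, 0.0, 0.0]), 0.01,
         np.array([5.0, 0.0, 0.0]), np.zeros(3))
print(v)
```

This mirrors the zero-velocity-update trick from pedestrian inertial navigation, with the contact event supplying the anchoring observation instead of a footfall.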
Behavioral dynamics models provide an observationally grounded basis for HRI algorithms and another tool for creating robust, natural, and interpretable HRI systems. Here, an HRI pick-and-place algorithm was implemented based on a behavioral dynamics model of human decision-making in an interpersonal pick-and-place task. Participants were able to complete the HRI pick-and-place task, and we provide comparisons to HHI pick-and-place results.
Computer vision techniques that can anticipate people's actions ahead of time could create more responsive and natural human-robot interaction systems. In this paper, we present a new human gesture forecasting framework for human-drone interaction. Our primary motivation is that despite growing interest in early recognition, little work has tried to understand how people experience these early recognition-based systems, and our human-drone forecasting framework will serve as a basis for conducting this human subjects research in future studies. We also introduce a new dataset with 22 videos of two human-drone interaction scenarios, and use it to test our gesture forecasting approach. Finally, we suggest follow-up procedures to investigate people's experience in interacting with these early recognition-enabled systems.
The purpose of this study is to examine people's expectations and preferences for a robot's personality based on the tasks the robot is performing. We conducted an interview followed by a survey. In the semi-structured interview, we classified four categories of tasks expected to be completed by robots: social, office, service, and physical. Based on these results, we conducted a survey of 381 participants to examine which types of personality people expect robots to display depending on the task the robot performs. Depending on the tasks, the personality traits of extraversion, conscientiousness, and openness showed significantly different values. The results imply that robots' personality traits need to be designed differently for different task types. With our study's results, robot designers can better develop the social and emotional aspects of robots. Such improvements would result in better communication between users and robots, and in users' perceptions of their needs being satisfied.
This paper presents a novel acceptability study of a tele-assistive robotic nurse for human-robot collaboration in medical environments. We designed a telepresence robotic platform with a multi-modal interaction framework, which provides haptic teleoperative control, gesture control, and voice commands for the robotic system to achieve mobile manipulation goals. We then performed a pilot user study in which participants experienced the robotic control scenarios, and analyzed the acceptability of our robotic system in terms of usability and safety criteria. The paper presents the analysis of surveys from 11 participants and concludes that gesture-based control is the most difficult for users, while voice-based control is easier but its safety is not assured.
This late-breaking report introduces an approach to measure yawning contagion between robots and humans. Understanding to what extent yawning can be contagious between robots and humans will help to generate more believable interaction behaviors for social robots and contribute to a better understanding of cognitive phenomena like empathy and their application in HRI. We will give an overview of an experiment which used an EMYS robot for the presentation of the yawning stimulus. We will present the results of our preliminary analysis of the collected data.
Softbank's Pepper robot recently gained massive traction in diverse domains. On the one hand, the robot interacts with potential customers in shopping malls, stores, at trade fairs, and at various social events, serving as a concierge or "Pepper-as-Promoter", grabbing attention and fostering customer engagement. On the other hand, the RoboCup federation opened up a completely new league in 2017: the Social Standard Platform League (SSPL). In this new league, Pepper was chosen as the standard social platform that teams will rely on in competitions in the years to come. Lastly, Pepper is an attractive platform for academic institutions since it is, in contrast to other platforms, relatively low priced and does not require a high degree of maintenance or prior knowledge with respect to, e.g., mechanical engineering.
However, designing, developing, and implementing social skills for a humanoid robot is not a trivial task, and one that is additionally subject to constant change in the robot's code base and configuration parameters, for instance. Thus, one of the major drawbacks of the Pepper platform is the lack of a proper simulation environment for testing new algorithms and high-level task execution strategies, for regression testing, or simply for providing an additional robot "instance" to compensate for peaks in utilization. In this contribution we present our work towards such a simulation environment. We focus on two major topics: a) seamless integration with the robot's ecosystem, e.g., NAOqi and ROS, and b) basic human-robot interaction capabilities that can foster behavior modeling and functional regression testing.
Table-top object manipulation is a well-established test bed on which to study both basic foundations of general human-robot interaction and more specific collaborative tasks. A prerequisite, both for studies and for actual collaborative or assistive tasks, is the robust perception of any objects involved. This paper presents a real-time-capable and ROS-integrated approach that brings together state-of-the-art detection and tracking algorithms, integrates perceptual cues from multiple cameras, and solves detection, sensor fusion and tracking in one framework. The highly scalable framework was tested in an HRI use-case scenario with 25 objects being reliably tracked under significant temporary occlusions. The use case demonstrates the suitability of the approach when working with multiple objects in small table-top environments and highlights the versatility and range of analysis available with this framework.
Time delay is widely recognized as a challenge in teleoperation of unmanned ground vehicles, as it often compromises teleoperation task performance. In response to this challenge, various types of delay compensation algorithms have been developed. These algorithms have enhanced teleoperation performance in terms of route-following accuracy and task completion time. However, little is known about how delay compensation algorithms affect human operators' workload. This study, therefore, aimed to assess how one delay compensation algorithm, the model-free predictor, affects operators' workload and teleoperation performance. A dual-task driving simulator was utilized in the present study, where participants drove a High Mobility Multipurpose Wheeled Vehicle (HMMWV) while performing a 1-back memory task. Preliminary results revealed that the delay compensation aid can reduce operators' workload while enhancing primary task performance.
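To illustrate the general idea behind predictive delay compensation (this is a minimal sketch of the concept, not the specific model-free predictor evaluated in the paper, whose details are not reproduced here), a compensator can dead-reckon the delayed vehicle pose forward using the operator's own buffered commands. All names and values below are hypothetical:

```python
from collections import deque

class DelayCompensator:
    """Sketch of a delay-compensating display for teleoperation.

    Rolls the received (delayed) vehicle pose forward over the feedback
    delay by integrating the operator's own recent velocity commands,
    so the displayed pose approximates the vehicle's present state.
    """

    def __init__(self, delay_steps, dt):
        self.dt = dt
        # Buffer of the operator's last `delay_steps` velocity commands.
        self.history = deque(maxlen=delay_steps)

    def predict(self, delayed_pose, command):
        """delayed_pose: (x, y) received over the delayed link;
        command: (vx, vy) the operator just issued."""
        self.history.append(command)
        x, y = delayed_pose
        # Integrate the buffered commands to roll the pose forward.
        for vx, vy in self.history:
            x += vx * self.dt
            y += vy * self.dt
        return (x, y)
```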
Reproducible experiments are a major requirement for transparent, comparable and verifiable results in the field of human-robot interaction (HRI). Furthermore, a version-controlled and well-structured "ready to deploy" system setup of software and hardware for an HRI experiment opens up a range of innovative possibilities for interdisciplinary efforts as well as simplified participation of collaborators in the research community. However, making experiments reproducible is not a trivial task: the difficulty stems from the lack of agreed-upon methodologies and tools and from the inherent technical complexity. In this work we present our latest efforts in the context of an international and interdisciplinary research project to enable robotics researchers, software engineers, and social scientists to work together to reproduce a behavioral HRI experiment. The successful reproduction demonstrates that our tool-chain approach meets the proposed requirements of the reproducibility problem. To the best of our knowledge, this is the first time an integrated systemic approach has allowed an identical instantiation of a complete HRI experiment at geographically distributed locations.
Maintaining the safety of humans is of paramount concern in the field of human-robot interaction. We employed a Research through Design (RtD) approach to explore better HRI safety mechanisms. We conducted a preliminary design study in which we presented a group of designers with various scenarios of different robotic platforms acting unsafely. Our findings indicate that participants mapped human responses to unsafe robotic interfaces onto natural human defensive behaviors in response to varying levels of threat stimuli. Based on these preliminary findings, we suggest leveraging the instinctive human ability to react to dangerous situations as a fail-safe complement to the robot's own built-in safety methods.
One of the fundamental aspects of social interaction is persuasion, a useful feature for a social robot to incorporate in its social behavior. Persuasiveness in human-robot interaction can be influenced by a number of factors, including the robot's appearance and behavior. In this study, the effect of robot gender was investigated. The experiment was conducted at a university with a humanoid NAO robot interacting with local Kazakhstani citizens and foreign participants from Asia, Europe and North America. The robot introduced itself, made a persuasive appeal and invited participants to make a donation. During the experiment, the robot's gender was varied by changing its synthesized voice and name. The findings imply that the robot's gender influenced participants' donation decisions: specifically, the female robot received more donations from both female and male participants than the male robot did. In addition, foreigners donated more money than locals. The overall results suggest that HRI designers could consider manipulating a robot's gender when designing robots for persuasive applications.
Personal drones are becoming increasingly present in our urban environments and everyday lives. While primarily used for entertainment, agriculture, delivery, and filming, drones are becoming more and more autonomous. In time, personal drones will become entities that are collocated with users, and may eventually even play a role as social collaborators. The proposed demonstration presents a mapping from five human emotional states (i.e., anger, happiness, sadness, surprise, and fear) to human-interpretable drone motions (e.g., changes in speed or rotation), rather than anthropomorphizing the drone.
With the anticipated increase of robotic ground vehicles in military operations, it is important to develop human-machine interfaces (HMIs) for controlling vehicles that accommodate the cognitive capacities of military personnel and support effective utilization. In an initial investigation of the cognitive implications of robotic ground vehicle use, we measured trust, workload, and performance to quantify the impact of transitions between modes of operation (supervisory control and tele-operation) on human performance. Trust increased after scenario completion and reported workload was low to moderate. Performing a transition impacted cognitive performance, and detection of targets was higher when targets were placed 'on path'. The results suggest that there are cognitive implications for HMIs and that placing icons at locations where people naturally look during a task might improve performance.
It is desirable that social robots be able to communicate a range of emotional states in a manner that is universally understood by people. We propose that the Geneva Emotion Wheel (GEW) has great potential as a tool for evaluating the ability of robots to express affective content. Factors that make the GEW advantageous over existing methods include a reduced reliance on word labels, the incorporation of emotional intensity, and coverage of 'no emotion' states. To support future research using the GEW, a software suite developed for conducting robotics experiments with the GEW has been made available to the community as an open-source tool.
Research suggests that interpersonal competences, such as having a sense of humor, can help establish sociality in human-robot interaction. This study tested the effect of different types of jokes, told by either a human or a robot (NAO), on the perceived intelligence and likability of the narrator. Results of a mixed-design ANOVA showed that only clever jokes increased the attribution of intelligence to a robot. No significant differences between joke types were found for liking of the robot.
During the last few years, we have observed a rapid increase in the number of robots manufactured and marketed for our homes. These social robots promise to become not just personal assistants but companions that know their owners' tastes and habits. As advances in artificial intelligence move the dream of home robots closer, exploitation techniques that could specifically target these robots need to be thoroughly examined. In this paper, we present our observations from performing an initial vulnerability analysis of a commercial social robot. Our results indicate that both manufacturers and application developers need to take cybersecurity into account when considering the use of their robots.
Human-robot teaming can be improved if a robot's actions meet human users' expectations. The goal of this research is to determine what variations of robot actions in response to natural language match human judges' expectations in a series of tasks. We conducted a study with 21 volunteers that analyzed how a virtual robot behaved when executing eight navigation instructions from a corpus of human-robot dialogue. Initial findings suggest that movement more accurately meets human expectations when the robot (1) navigates with an awareness of its environment and (2) demonstrates a sense of self-safety.
The goal of this research is to develop a self-training system for calligraphy brushwork. In order to draw a well-shaped character, the brush must be moved properly. In the developed system, the student's brushwork is measured by a Leap Motion sensor, and if the handwriting is not proper, the student's wrist is stimulated. The pressure presentation device stimulates the student's wrist as a handwriting instruction not only when the brush departs from the reference trajectory but also when the brush is expected to overrun the proper end position during a horizontal stroke. Experiments with three subjects were carried out, and the results confirmed that the developed system is an effective device for self-training instruction of calligraphy.
This paper presents our design of an autonomous navigation system for a mobile robot that guides people who are blind or have low vision in indoor settings. It begins by presenting the user studies that shaped our design of the system, moves on to describing our model of human-robot coupled motion, and concludes by describing our autonomous navigation system.
This paper presents a pilot study in which we examine the interactions between human-robot teammate trust, cognitive load, and robot intelligence levels. In particular, we attempt to assess these interactions during a competitive game of capture the flag played between a human and a robot. We present results as the human plays against robots of different intelligence levels and rates their trustworthiness as potential teammates through a post-experiment questionnaire. We also present our exploration of heart-rate measures as approximations of cognitive load. Our goal is to derive guidelines for future autonomy and interaction designers such that their systems reduce cognitive load and increase trust in robot teammates. This initial experiment uses the fewest vehicles that still allow gathering competitive data on the water; future experiments will increase in complexity to many opponents and many teammates.
Trust plays an essential role in ensuring safe and robust human-robot interaction. Recent work suggests that people can be too trusting of technology, leading to potentially dangerous situations. We carried out a series of experiments in an autonomous car simulator to test whether people's behavior differs when real-life consequences are applied, compared to pure simulation. The study used six experimental conditions in a between-subjects design in which participants (N = 121) interacted with the simulator and were told they could assume control of the autonomous car at any point during the simulation. Results show that participants are significantly less trusting of the autonomous system when real-life consequences are involved (p = .014).
Human beings can use thermal sensation to interpret human emotions. Hence, it is possible to study and analyze the use of thermal sensation as a medium to transmit emotions from a robot to a human. However, no previous studies have investigated the correlation between a robot's body temperature and its emotion, or its effect on human perception. Therefore, in this study, we propose the design of a robot that uses its body temperature to communicate its emotional state.
Since autism spectrum disorder (ASD) diagnosis relies solely on behavioral observations by experienced clinicians, we investigate whether parts of this job can be performed autonomously by a humanoid robot using its on-board sensors. We have developed a robot-assisted ASD diagnostic protocol and propose a partially observable Markov decision process (POMDP) framework that enables the robot to decide on its next actions within this protocol. To obtain some of the parameters of our POMDP models, we survey experienced ASD clinicians and encode their knowledge of children's reactions in the observation probabilities of the models. We evaluate our approach in a small study with four children (two children with ASD, two typically developing children).
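To make the POMDP machinery concrete, the following minimal sketch (our own illustration, with made-up numbers standing in for the paper's clinician-derived probabilities) shows a belief update over a hypothetical two-state diagnostic model, where a "call the child's name" action yields a responds/no-response observation:

```python
import numpy as np

# Hypothetical two-state model: child has ASD vs. typically developing (TD).
states = ["ASD", "TD"]

# Diagnostic actions do not change the underlying state: identity transition.
T = np.eye(2)

# Observation model O[a][s][o]: probability of each observed reaction
# ("responds", "no_response") given the state; in the paper such values
# are elicited from clinician surveys (numbers here are made up).
O = {"call_name": np.array([[0.3, 0.7],    # ASD: responds / no response
                            [0.9, 0.1]])}  # TD:  responds / no response

def belief_update(belief, action, obs_idx):
    """Standard POMDP belief update: b'(s') ~ O(o|s') * sum_s T(s'|s) b(s)."""
    predicted = T.T @ belief
    updated = O[action][:, obs_idx] * predicted
    return updated / updated.sum()

b = np.array([0.5, 0.5])                      # uninformative prior
b = belief_update(b, "call_name", obs_idx=1)  # child did not respond
print(dict(zip(states, b.round(3))))          # belief shifts toward ASD
```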
Developing countries in South Asia face unique economic and societal challenges such as poor infrastructure, low literacy and gender-specific social norms. A particular concern is the low participation of females in mixed-gender activities. Any technological tool developed to improve the population's access to government and financial infrastructure needs to tackle these issues. In order to investigate design principles concerning these challenges, we are developing a culturally aware robot receptionist called ROSS (Robot Oriented Support Staff). The paper reports three studies on ROSS, covering our initial design effort, user interactions, strategies to engage low-literate users and female users, implementation decisions to deal with the lack of services like internet connectivity, duration of interactions, rate of understanding, and rate of task completion. Findings from a user experience survey indicate that culturally aware design can increase acceptability and interaction time. Finally, we discuss possible future design changes in ROSS.
Anticipation is an essential ability for any system designed for human-robot interaction. As human activities are complex, the robot/machine should be capable of processing long time-series observations to understand them. These observations are normally high-dimensional, corrupted, noisy, high-frequency, and contain very long temporal relationships. In this paper, a new deep learning architecture is proposed to anticipate driving maneuvers. The sources of sensory data in this domain could be GPS location, the car's speed, inside and outside cameras, as well as other car-related sensors. In our proposed model, we use pairs of max-pooling and convolutions to represent spatial dependencies in the video frames and apply dilated convolutions to capture temporal relationships for maneuver anticipation. We also show that the performance of our proposed approach is competitive with other well-known machine learning architectures in this domain.
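A minimal sketch of this kind of architecture, assuming PyTorch and placeholder layer sizes (the paper's exact configuration is not reproduced here): convolution/max-pooling pairs extract per-frame spatial features, and dilated 1-D convolutions aggregate them over time:

```python
import torch
import torch.nn as nn

class ManeuverAnticipationNet(nn.Module):
    """Illustrative sketch of the described architecture; shapes and
    layer sizes are placeholders, not the paper's values."""

    def __init__(self, n_maneuvers=5):
        super().__init__()
        self.spatial = nn.Sequential(          # applied to each frame
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),           # -> (B*T, 32, 1, 1)
        )
        self.temporal = nn.Sequential(         # dilated convs over time
            nn.Conv1d(32, 64, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(64, 64, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv1d(64, 64, 3, padding=4, dilation=4), nn.ReLU(),
        )
        self.head = nn.Linear(64, n_maneuvers)

    def forward(self, video):                  # video: (B, T, 3, H, W)
        b, t = video.shape[:2]
        f = self.spatial(video.flatten(0, 1)).view(b, t, 32)
        f = self.temporal(f.transpose(1, 2))   # (B, 64, T)
        return self.head(f[:, :, -1])          # predict from the last step

net = ManeuverAnticipationNet()
logits = net(torch.randn(2, 16, 3, 64, 64))   # 2 clips, 16 frames each
```

The stacked dilations (1, 2, 4) grow the temporal receptive field exponentially with depth, which is what lets a short stack of 1-D convolutions cover the long observation windows the abstract mentions.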
Mobile robots are starting to appear in our everyday life, e.g., in shopping malls, airports, nursing homes and warehouses. Often, these robots are operated by non-technical staff with no prior experience or education in robotics. Additionally, as with all new technology, there is a certain reservedness when it comes to accepting robots in our personal space. In this work, we propose making use of state-of-the-art Mixed Reality (MR) technology to facilitate acceptance of and interaction with mobile robots. By integrating a Microsoft HoloLens into the robot's operating space, the MR device can be used to a) visualize the robot's behavior state and sensor data, b) visually notify the user about planned or future behavior and possible problems or obstacles of the robot, and c) actively serve as an additional external sensor source. Moreover, by using the HoloLens, users can operate and interact with the robot without being close to it, as the robot is able to sense through the user's eyes.
Intelligent agents need to perceive and correctly interpret the social signals of their interaction partners. In order to support the development of these skills, we establish a process of long-term data acquisition, annotation and continuous model evaluation. We facilitate automatic recording and annotation of unconstrained, multicentric interactions in a smart environment. Finally, we simplify manual ground truth annotation and allow continuous evaluation of our recognition models on a growing set of interactions.
This paper explores the use of a social robot for one-on-one tutoring, in a study in which 15 children participated in four second-language tutoring sessions. Specifically, changes across sessions are measured on two dimensions: engagement and performance. Results have revealed a significant positive change in performance as well as a significant pattern in engagement across the interactions.
We present an experimental online study examining how participants (N=343) interpret affectively congruent and incongruent body movements/gestures and non-linguistic utterances (NLUs).
Socially assistive robots may distract users from their current activity even while performing non-interactive tasks. In this work, we consider a robotic system approaching an elderly person in order to monitor his/her behavior while the person is occupied with a specific daily activity. We conducted a pilot study to assess the possibility of automatically evaluating the person's awareness and how the robot's presence interferes with the current activity. Initial conclusions were obtained by comparing questionnaire results with video analysis.
Robotic teaching has not received nearly as much research attention as robotic learning. In this research, we used the humanoid robot Baxter to provide feedback and positive reinforcement to human participants attempting to achieve a complex task. Our robot autonomously casts the teaching problem as one that invokes the exploration/exploitation tradeoff to understand the cognitive strategy of its human partner and develop an effective motivational approach. We compare our learned reinforcement model with a baseline non-reinforcement approach and with a random reinforcer.
Recent research has indicated that people engage, and unabashedly so, in the verbal abuse of female-gendered robots. To understand whether this also cuts across racial lines, and furthermore, whether it differs from the objectifying treatment of actual people, we conducted a preliminary mixed-methods investigation of online commentary on videos of three such robots -- Bina48, Nadine, and YangYang -- contrasted with commentary on videos of three women with similar identity cues. Analysis of the frequency and nature of abusive commentary suggests that: (1) the verbal abuse of Bina48 and YangYang (two robots racialized as Black and Asian, respectively) is laced with both subtle and overt racism; and (2) people more readily engage in the verbal abuse of humanlike robots than of other people. Not only do these findings reflect a concerning phenomenon; consideration is also warranted as to whether people's engagement in abusive interactions with humanlike robots could impact their subsequent interactions with other people.
This study presents a Brain-Computer Interface (BCI) based remote-presence system for the humanoid NAO robot. The system would be useful for partially or fully paralyzed patients to interact with people and live an active social life. A P300-based BCI is used to control high-level desires of the humanoid, and visual stimulus presentation is implemented to control the robot. "Programming by Demonstration (PbD)" methods are used to train the robot to perform human interactive actions, and the NAO robot's native speller is used to talk with people. In general, the proposed solution combines two major techniques: Programming by Demonstration and BCI. An experiment was designed and conducted with 5 healthy subjects performing a handshake, waving, walking, etc., verifying the applicability of the proposed approach.
This work demonstrates the adaptation of an industrial robotic system into an affordable and accessible open platform for education and research through rapid prototyping techniques. The ABB YuMi collaborative robot is controlled using a virtual reality teleoperation system and adapted using a low-cost gripper extension for surgical tools. The design and assessment of three surgical tools used in two mock surgical procedures are showcased in this paper. The perpendicular scalpel tool surpassed the others in performance time, and scissors were found to be more effective than the parallel scalpel configuration at cutting the affected tissue in the melanoma extraction task (15% of healthy tissue removed versus 42%).
A great deal of effort has gone into the development of hardware and software for functional social robots in daily environments. However, not enough exploration has been done into adapting the current infrastructure around these robots to make them truly useful for users. In this paper, we present a proposal for the design of Robot Ergonomic environments shared by humans and robots. As future work, we aim to design guidelines for interior designers, architects and robot designers to develop Robot Ergonomic spaces.
In order to establish social and bonding relationships with children, robots need to be able to adapt to users of different age and gender groups to keep them engaged and motivated. To this end, this research examines the responses of 107 children, ages 5 to 12, who interacted with a humanoid NAO robot that communicated with synthesized female and male voices. Our results show that young children (ages 5 to 8) were not able to successfully attribute gender to the robot in correspondence with the synthesized voice. In addition, we explicitly investigated children's preferences for the robot's gender: younger children indicated a preference for a robot of their own gender, while older children (ages 9 to 12) showed no difference in preferences for the robot's gender.
Social groups have different rules and preferences for what they consider acceptable behavior, and a social behavior that is favorable in a certain cultural context may be unacceptable in another. In this study, we evaluate the effects of robot communication style on how participants from two distinct cultures (Indian and American) perceive robots that use or violate cultural norms. We recruited participants from Amazon's Mechanical Turk to watch a short video of three humanoid robots interacting, and explored the impact of this difference on how participants perceive robot appropriateness for a range of tasks. Results indicate an association between participant culture and preferred robot communication style for the task of older adult care.
Performing search and rescue missions in disaster-struck environments is challenging. Despite advances in the robotic search phase of rescue missions, few works have focused on the physical casualty extraction phase. In this work, we propose a mobile rescue robot capable of performing a safe casualty extraction routine, adopting a loco-manipulation approach. We have designed and built a mobile rescue robot platform called ResQbot as a proof of concept of the proposed system, and conducted preliminary experiments using a sensorised human-sized dummy as a victim to confirm that the platform can perform a safe casualty extraction procedure.
In this work, we study how humans transfer or generalize trust in robot capabilities across tasks, even with limited observations. We present results from a human-subjects experiment using a real-world Fetch robot performing household tasks. In summary, we find that human trust generalization is influenced by perceived task similarity, difficulty, and robot performance.
The purpose of this study is to determine levels of support for the consideration of robot rights and to identify predictors of that support. Findings demonstrated that negative attitudes toward robots, the perceived credibility of the petitioner, and prior interaction with robots were significant predictors of individuals agreeing to sign a petition on the issue of robot rights. The gender of the participant and whether the petitioner was a human being or a Pepper robot did not significantly predict willingness to sign the petition.
Our research aims toward a method of evaluating how invasion of personal space by a robot, with appropriate social context, affects human comfort. We contribute an early design of a testbed to evaluate how comfort changes because of invasion of personal space by a robot during a collaborative task within a shared workspace. Our testbed allows the robot to reach into the human's personal space at different distances and urgency levels. We present a collaborative task testbed using a humanoid robot and future directions for this work.
A 2 (mediator: human, robot) x 3 (humor styles: affiliative, aggressive, self-defeating) factorial experiment was conducted to test participant perceptions of robot-enacted humor in conflict mitigation. Participants watched brief video vignettes of roommate conflict in which either a robot or human employed humor. Funniness ratings did not differ significantly between conditions of humor style or mediator. However, participants perceived affiliative and aggressive humor employed by a robot as less appropriate than when used by a human.
We describe the design and implementation of Blossom, a social robot that diverges from convention by being flexible both internally and externally. Internally, Blossom's actuation mechanism is a compliant tensile structure supporting a free-floating platform. This design enables organic and lifelike movements with a small number of motors and simple software control. Blossom's exterior is also flexible in that it is made of fabric and not rigidly attached to the mechanism, enabling it to freely shift and deform. The handcrafted nature of the shell allows Blossom to take on a variety of forms and appearances. Blossom is envisioned as an open platform for user-crafted human-robot interaction.
Collaborative robots require high intelligence in order to adapt to the widely varied behaviors of their human partners. We have previously proposed interactive robot learning of collaborative actions from large datasets of human-robot interactions. This work furthers that development by incorporating embodied behaviors in human-robot collaboration into the robot learning approach. In this preliminary work, we develop immersive user interfaces with virtual reality devices for cyber-physical interaction between human and robot, in order to represent embodied collaborative behaviors in our experiment system. We obtain the human's visual observations by tracking the movement of the head-mounted device to determine the observed target, the verbal communication between agents from spoken speech, and the agents' actions, i.e., body movements, by tracking the avatars' locations in the virtual world to determine the traveled paths.
Robots involved in HRI should be able to adapt to their partners by learning to autonomously select the behaviors that maximize the pleasantness of the interaction for the humans involved. To this aim, affect could play two important roles: serving as perceptual input to infer the emotional status and reactions of the human partner, and acting as an internal motivation system for the robot, supporting reasoning and action selection. In this perspective, we propose to develop an affect-based architecture for the humanoid robot iCub with the purpose of fully autonomous personalized HRI. This base framework can be generalized to fit many different contexts (social, educational, collaborative and assistive), allowing for natural, long-term, and adaptive interaction.
The majority of socially assistive robots interact with their users using multiple modalities. Multimodality is an important feature that can enable them to adapt to the user's behavior and the environment. In this work, we propose a resource-based modality-selection algorithm that adjusts the robot's use of interaction modalities according to the available resources, in order to keep the interaction with the user comfortable and safe; for example, the robot should not enter the board space while the user is occupying it, or speak while the user is speaking. We performed a pilot study in which the robot acted as a caregiver in cognitive training, comparing a system with the proposed algorithm to a baseline system that uses all modalities for all actions unconditionally. Results of the study suggest that the reduced complexity of interaction does not significantly affect the user experience and may improve task performance.
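A minimal sketch of what such a resource-based selection rule can look like (our own illustration; the resource and modality names below are hypothetical, not the paper's):

```python
# Each modality is gated by the resources it needs: the robot does not
# speak while the user holds the audio channel, or gesture into space
# the user is occupying.
MODALITY_RESOURCES = {
    "speech":  {"audio_channel"},
    "gesture": {"board_space"},
    "display": {"screen"},
}

def select_modalities(requested, occupied_resources):
    """Return the subset of requested modalities whose required
    resources do not conflict with resources the user occupies."""
    return [m for m in requested
            if MODALITY_RESOURCES[m].isdisjoint(occupied_resources)]

# User is speaking and reaching over the board: only the display is used.
print(select_modalities(["speech", "gesture", "display"],
                        {"audio_channel", "board_space"}))
```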
This paper presents exploratory research on the proxemic behavior occurring during child-robot interaction. Due to the increasing usage of NAO robots in child environments and applications, it is increasingly important to design an appropriate model of proximate interaction for children of different age groups. For this purpose we conducted an exploratory study 'in the wild' involving 26 children. Our main findings indicate that proximity depends on age. Future work is needed to understand the factors that influence child-robot proxemics.
Improper use of crutches can cause secondary accidents such as falls. In this paper, an instruction robot for crutch-walk training is introduced. The robot moves alongside a walking crutch user and measures the motions of his/her body parts. Based on these measurements, the robot provides the crutch user with advice for proper walking motions, allowing the user to review his/her own walking. It is expected that crutch-walk training with this robot will improve the user's walking motions and decrease the possibility of accidents.
This paper presents a preliminary study exploring the playing strategy of a robot and the way it can affect the learning outcomes of young adults. We developed a social robot that plays the role of a learning companion for a foreign language, and conducted a preliminary study in which we explored the effect of two robot playing strategies, always-winning and always-losing, on participants' learning outcomes. In addition, we wanted to find out whether participants' personality might affect their learning performance. This pilot study was conducted with 20 participants aged 17-25. Our preliminary findings suggest that extroverts learn more when they beat the robot, while introverts learn more when the game ends in a draw.
Research on trust in human-human interaction has typically focused on notions of vulnerability, integrity, and exploitation, whereas research on trust in human-machine interaction has typically focused on competence and reliability. In this initial study, we explore whether these different aspects of trust can be considered parts of a multidimensional conception and measure of trust. We gathered 62 words from dictionaries and the trust literature and asked participants to evaluate whether each word carried a "personal" meaning or a "capacity" meaning. Through an iterative process using Principal Components Analysis (PCA) and item analysis, we derived four components that capture the multidimensional space occupied by the concept of trust. The resulting four components yield four five-item subscales of trust with the following alpha reliabilities: Capable = .88, Ethical = .87, Sincere = .84, and Reliable = .72.
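For illustration, the analysis pipeline described above can be sketched as follows, assuming NumPy/scikit-learn and random placeholder data in place of the actual ratings; the Cronbach's alpha computation is the standard formula:

```python
import numpy as np
from sklearn.decomposition import PCA

# Rows = respondents, columns = the 62 candidate trust words.
ratings = np.random.rand(200, 62)        # placeholder for survey data

pca = PCA(n_components=4)
pca.fit(ratings)

loadings = pca.components_               # (4 components x 62 words)
for i, comp in enumerate(loadings):
    top_items = np.argsort(np.abs(comp))[-5:]   # 5 highest-loading words
    print(f"component {i}: word indices {sorted(top_items.tolist())}")

def cronbach_alpha(items):
    """Standard alpha for one subscale; items: respondents x items."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))
```

In the actual study the item analysis iterates: low-loading or reliability-reducing words are dropped and the PCA is re-run until stable five-item subscales emerge.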
Social robots are gaining relevance in many domains with high potential for social good (e.g., healthcare, care for the elderly, robots as social companions). However, their design bears high responsibility and requires a profound understanding of the degree to which psychological mechanisms of human-human interaction (HHI) apply to human-robot interaction (HRI). The present experiment (N=40) explores this issue by replicating the famous social influence experiments by Asch [1], which showed that human judgment can be swayed by confederates against better knowledge. Our study replaced one of the confederates with a robot and revealed that a robot's influence can be even higher than that of other humans. Implications for future research are discussed.
Previous research has shown that the presence of a human peer during a learning task can positively affect learning outcomes. The current study aims to find out how second-language (L2) vocabulary gains differ depending on whether children learn by themselves, with a child peer, or with a robot peer. Children received L2 vocabulary training in one of these three conditions, and their word learning was measured directly after the training and one week later. Contrary to our expectations, children learning by themselves outperformed children in the peer conditions on one of four word-knowledge tasks; on the other tasks, there were no differences between the three conditions. Suggestions for further study of the potential benefits of a robot peer are provided.
In this paper, we present our ongoing research on robots as a screening tool for potential cognitive impairment, a risk factor for dementia and other mental diseases. We implemented a psychometric test on a state-of-the-art social robot, realizing a cognitive assessment via Human-Robot Interaction (HRI), and we compared it to the traditional paper-and-pencil testing. Our goal was to test the feasibility of this procedure and collect information about novel technology applied in psychological assessment. Results suggest that the goal is achievable under professional supervision, but the technology needs further work and refinement for a fully autonomous assessment.
Wearable robots have become feasible with recent advances in actuators, electronics, and rapid prototyping. We report on the development of a supernumerary wearable robotic forearm for close-range collaborative activities. In order for this device to be effective in its role, both usability and technical challenges must be addressed. We report on studies identifying desirable usage contexts for such a device, followed by an analysis of the advantage it provides and the loads it exerts on the user. Finally, we discuss ongoing work on adapting path-planning, perception, and behavior models to suit this device's novel interaction scheme.
The recent advances in machine learning have led to an escalation in media coverage of artificial intelligence and robotics. To understand the attitudes and apprehensions of the general public towards intelligent robots, we conducted a first pilot study in which participants filled in a survey including the Negative Attitudes towards Robots Scale. This work reports and discusses the results of this study with respect to culture and age.
While researchers expect that it will be technologically possible for robots to be widely available in society in the near future, the public shows negative attitudes toward robots that may impede their acceptance. Intergroup contact theory shows that positive contact with an outgroup reduces prejudice and increases positive emotions towards that outgroup. We applied this to an interaction between a participant and a humanoid robot to determine whether those who interacted directly with the robot, including touching it, would perceive all robots more positively and be more willing to interact with them. Results indicated that contact with the robot, compared with the Control condition, produced a marginally higher willingness to interact with robots.
Human-robot collaboration provides a great solution to the complex hybrid assembly tasks of intelligent manufacturing. To augment and guarantee task quality in the human-robot collaboration process, collaboration efficiency, including time consumption and human effort, should be considered in the robot's action planning. In this study, we propose a novel and practical cost-function-based approach for robot action planning in human-robot collaboration to address this challenge. With this approach, robot action planning can be dynamically optimized to determine the assisted assembly steps in a human-robot co-assembly task. A preliminary experiment was conducted to evaluate the proposed approach, and the results suggest that it successfully generates optimal actions for the robot and improves task efficiency in human-robot collaboration.
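As an illustration of cost-function-based action selection (a simplified sketch, not the paper's actual cost functions), the robot might score each candidate assist step by a weighted trade-off between its time cost and the human effort it saves; all names and numbers below are hypothetical:

```python
W_TIME, W_EFFORT = 1.0, 2.0   # illustrative trade-off weights

candidate_steps = [
    # (step name, robot time cost [s], human effort saved [arbitrary units])
    ("fetch_bracket",    8.0, 5.0),
    ("hold_panel",       3.0, 7.0),
    ("hand_over_screws", 5.0, 2.0),
]

def cost(time_s, effort_saved):
    # Lower is better: time spent counts against a step,
    # human effort relieved counts in its favor.
    return W_TIME * time_s - W_EFFORT * effort_saved

best = min(candidate_steps, key=lambda s: cost(s[1], s[2]))
print("robot assists with:", best[0])   # -> hold_panel
```

Re-evaluating such costs as the assembly state changes is what makes the planning "dynamic": the chosen assist step shifts as remaining steps, times, and efforts shift.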
Enabling a robot to predict human intentions in human-robot collaborative hand-over tasks is a challenging but important issue to address. We develop a novel and effective teaching-learning-prediction (TLP) model with which the robot learns online from natural multi-modal human demonstrations during human-robot hand-overs and then predicts human hand-over intentions from wearable sensing information. The human can program the robot using partial demonstrations according to updated tasks and his/her personal hand-over preferences, and the robot can leverage its learned strategy online to actively predict human hand-over intentions and assist the human in collaborative tasks. We evaluate the approach through experiments.
Recent work on natural language generation algorithms for human-robot interaction has not considered the ethical implications of such algorithms. In this work, we argue that simply by asking for clarification, a robot may unintentionally communicate that it would be willing to perform an unethical action, even if it has ethical programming that would prevent it from doing so. In doing so, the robot may not only miscommunicate its own ethical programming but also negatively influence the morality of its human teammates.
We describe a behavioral navigation approach that leverages the rich semantic structure of human environments to enable robots to navigate without an explicit geometric representation of the world. Based on this approach, we then present our efforts to allow robots to follow navigation instructions in natural language. With our proof-of-concept implementation, we were able to translate natural language navigation commands into a sequence of behaviors that could then be executed by a robot to reach a desired goal.
This paper proposes a computational framework to estimate surgeon attributes during Robot-Assisted Surgery (RAS). The three investigated attributes are workload, performance, and expertise level. The framework leverages multimodal sensing and joint estimation and was evaluated with twelve surgeons operating on the da Vinci Skills Simulator. The multimodal signals include heart rate variability, wrist motion, electrodermal activity, electromyography, and electroencephalography. The proposed framework reached an average estimation error of 11.05%, and jointly inferring surgeon attributes reduced estimation errors by 10.02%.
Despite the increase in research and applications for child-robot interaction, little is known about children's responses to a robot's gender across age and gender groups. This paper presents an exploratory study intended to examine whether a robot's perceived gender affects the interaction between a child and a robot. The paper details the preliminary results of an observational, child-centered study with a gendered robot conducted in the hall of a residential facility with 24 children. Our findings suggest that children liked playing with the robot of the same gender as their own.
A declared aim of prosthetic research is enabling the physical and psychological integration of robotic limbs into a user's body schema. The rubber hand illusion (RHI) gives insight into the impact of sensory feedback and its multisensory integration on embodiment, which can inform wearable robot design, e.g., for prosthetic hands. This paper presents a pilot study using a robotic hand and shows its ability to induce a robotic hand illusion (RobHI). Two activity levels were tested and both were found to successfully elicit an illusion. Future research will concentrate on the impact of participants' control of the robotic hand and their reception of sensory feedback on the elicitation of an illusion and subsequently on embodiment. Further research plans are envisioned to profit from advances made in human-computer interaction concerning users' needs and their importance in satisfying interaction scenarios.
Robust autonomy can be achieved with learning frameworks that refine robot operating procedures through guidance from human domain experts. This work explores three capabilities required to implement efficient learning for robust autonomy: (1) identifying when to garner human input during task execution, (2) using active learning to curate what guidance is received, and (3) evaluating the tradeoff between operator availability and guidance fidelity when deciding who to enlist for guidance. We present results from completed work on interruptibility classification of collocated people that can be used to help in evaluating the tradeoff in (3).
Hugs are one of the first forms of contact and affection humans experience. Due to their prevalence and health benefits, we want to enable robots to safely hug humans. This research strives to create and study a high-fidelity robotic system that provides emotional support to people through hugs. This paper outlines our previous work evaluating human responses to a prototype's physical and behavioral characteristics, and then it lays out our ongoing and future work.
Social robots are increasingly applied in assistive settings where they interact with human users to support them in their daily life. In such settings, robust and reliable social interaction abilities are required, especially for robots that interact autonomously with humans. Apart from challenges regarding safety and trust, the complexity of attaining mutual understanding, engagement and assistance in social interactions comprising spoken language and non-verbal behaviors needs to be taken into account. In addition, different users and user groups show inter-individual differences in their personal preferences, skills and limitations, which makes it more difficult to develop reliable and understandable robots that work well in different situations and for different users.
This research investigates the evaluation of a key component in human-robot interaction (HRI): the psychological and physical stress of humans interacting with a robot. The research, aimed at understanding the role of stress in human-robot teaming, consists of two phases. The first phase evaluates different methods of inducing stress in unstructured and dynamic scenarios and compares them to traditional stress-induction methods such as showing a video or solving math problems without paper and pencil. The second phase includes a study of human-robot interactions in which a human operates a robot with and without induced stress. We expect this research to yield improved usability and user experience for robot operators, along with an understanding of the stress levels a robot operator experiences in unstructured and dynamic scenarios, such as those encountered by tactical officers. By understanding how stress impacts interactions under different modes of robot operation, methods can be developed to reduce operator stress by designing modes of robot operation suited to human-robot teaming in dynamic and unstructured scenarios.
Smart Walkers are robotic devices that improve physical stability and provide sensorial support for visually impaired people with lower-limb weakness or poor balance. Such devices also offer support for people with cognitive disabilities who cannot safely use conventional walkers. This work proposes an admittance controller to guide visually impaired people: physical interaction between the user and the Smart Walker is used to haptically indicate the path to be followed. The user experience is evaluated by varying the control parameters and the predetermined path, which are used to improve the human-robot-environment interaction.
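For readers unfamiliar with admittance control, a minimal sketch of the underlying law, M·dv/dt + B·v = F, follows; the parameter values are illustrative, not those of the paper:

```python
# The commanded walker velocity responds to the user's applied force
# through a virtual mass-damper; tuning M and B changes how "heavy"
# and how "sluggish" the walker feels in the user's hands.
M, B = 5.0, 10.0        # virtual mass [kg] and damping [N*s/m]
DT = 0.01               # control period [s]

def admittance_step(force_on_handle, v):
    """One Euler integration step of M*dv/dt = F - B*v."""
    dv = (force_on_handle - B * v) / M
    return v + dv * DT

v = 0.0                              # current commanded velocity [m/s]
for _ in range(300):                 # user pushes with a steady 20 N
    v = admittance_step(20.0, v)
print(round(v, 2))                   # converges toward F/B = 2.0 m/s
```

Haptic path guidance can then be layered on top, e.g., by injecting a virtual corrective force toward the predetermined path into the same law, which the user feels as a gentle pull through the handles.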
Robots and other artificial agents are increasingly being considered for domains involving complex decision-making and interaction with humans. These agents must adhere to human moral and social norms: agents that fail to do so will be at best unpopular and at worst dangerous. Artificial agents should therefore have the ability to learn (both from natural language instruction and from observing other agents' behavior) and to obey multiple, potentially conflicting norms.
Imagine you are assembling a table with a robot and it asks you to pass the table leg. Unsure of which leg to pass, you ask it to clarify but do not receive a response and are left confused. As robots with diverse skills and complex learning algorithms emerge, an important prerequisite of seamless human-robot collaboration will be for humans to understand what robots can and cannot do, and why. Robots, in turn, need to understand how their actions affect humans' perceptions of their capability (e.g., asking a question may unintentionally imply that the robot can also understand speech), and they need to generate behavior that humans can easily understand. A mismatch in expectations of capability can lead to inefficient teamwork and weakened trust. To this end, I have been motivated by how robots can accurately set expectations of their capabilities; in particular, I have focused on how robots can communicate incapability. In Kwon et al., we presented an optimization-based method to automatically generate trajectories that express a robot's incapability.
Pediatric oncology patients could benefit from bonding with a social robot and talking about their day in the hospital. With our research we aim to contribute to the development of a robot that is able to facilitate a child-robot bond autonomously and long-term. We propose to use robot-disclosure and a shared interaction history to create a child-robot bond where the child feels comfortable and familiar enough to talk about their day with the robot.
Although most social robots introduced in the consumer market are devoid of personality, research in HRI shows that designing personality-like behavior for robots could greatly benefit future interactions [2]. This work suggests the relationship between sidekick characters and protagonists, frequently found in media narratives, as a metaphor for designing robot behavior and personality. Findings from a content analysis study that examined fifteen characters from popular books and movies suggest three guidelines for designing personal robots: Reciprocity, Affirmation and Independence.
Current vehicle-pedestrian interactions involve the vehicle communicating cues through its physical movement and through nonverbal cues from the driver. Our work studies vehicle-pedestrian interactions at a crosswalk in the presence of autonomous vehicles (without a driver) facilitated by the deployment of interfaces intended to replace missing driver cues. We created four prototype interfaces based on different modalities (such as visual, auditory, and physical) and locations (on the vehicle, on street infrastructure, on the pedestrian, or on a combination of the vehicle, street infrastructure, and the pedestrian). Our findings from two user studies indicate that interfaces which communicate awareness and intent can help pedestrians attempting to cross. We also find that interfaces are not limited to existing only on the vehicle.
Autonomous robots in the home and on the road are fundamentally changing the way we live and interact. The visual expressions and interactions of these devices are well studied; however, more could be done to learn how sound can be a deliberate (or sometimes accidental) channel of communication from autonomous systems to humans. Combining engineering design, music, acoustics, and psychology, my thesis aims first to identify how sound colors human-robot interactions, and second to design acoustic guidelines that can improve trust in autonomous systems. As case studies, I plan to evaluate real-life interactions at two scales: sidewalk robot-pedestrian interactions and autonomous vehicle-pedestrian interactions, treating autonomous cars as large robots that we sit inside. This work will produce a generalizable methodology that refines interfaces between humans and technology.
Social robots may make use of social abilities such as persuasion, commanding obedience, and lying. Meanwhile, the field of computer security and privacy has shown that these interpersonal skills can be applied by humans to perform social engineering attacks. Social engineering attacks are the deliberate application of manipulative social skills by an individual in an attempt to achieve a goal by convincing others to do or say things that may or may not be in their best interests. In our work we argue that robot social engineering attacks are already possible and that defenses should be developed to protect against these attacks. We do this by defining what a robot social engineer is, outlining how previous research has demonstrated robot social engineering, and discussing the risks that can accompany robot social engineering attacks.
My research studies the relation between trust and conformity with a group of robots and which specific aspects of robots cause people to trust them. We devised an experimental setup that allows us to measure whether a group of robots can cause conformity in participants: participants play a game in which they give a preliminary and a final answer, and we measure how often they change their answer to match that of the group of robots. Preliminary studies show that robots are capable of causing conformity and that trust in the robots plays a role in the decision to conform. Future research will study different factors (for example, the type of robots or the size of the group) that play a role in deciding whether to trust, and therefore conform to, a group of robots.
Empirical evidence has demonstrated that learning with and from a physically present, interactive robot can be more effective than learning from classical on-screen media [6, 8]. In the L2TOR project we work on using the robot Nao to support second-language learning, a problem that is becoming increasingly important. We focus on preschool children aged 5-6 years, for whom it is crucial to develop adequate knowledge of the academic language, as later educational success builds on it [4, 7].
While the cost of creating robots is declining, deploying them in industry remains expensive. Widespread use of robots, particularly in smaller industries, is more easily realized if robot programming is accessible to non-programmers. Our research explores techniques to lower the barrier to robot programming. In one such attempt, we propose situated tangible robot programming: programming a robot by placing specially designed tangible blocks in its workspace. These blocks are used to annotate objects, locations, or regions and to specify actions and their ordering. The robot compiles a program by detecting the blocks and objects in the environment and grouping them into instructions by solving constraints. We designed a preliminary tangible language and blocks and evaluated the intuitiveness and learnability of the approach. Our user studies provide evidence for the promise of situated tangible programming and identify the challenges to address. In addition to improving the block design and extending the language, we plan to integrate tangible programming into a holistic programming-environment ecosystem in the future.
In human-human interactions, individuals naturally achieve fluency by anticipating their partner's actions. This predictive ability is largely lacking in robots, leading to stilted human-robot interactions. We aim to improve fluency in human-robot reaching motions using a unified predictive model of human reaching. Using this model, we allow the robot to infer human intent, while also applying the same model to generate the robot's motion so as to make its intent more transparent to the human. We conducted a study on human reaching motion and constructed an elliptical motion model that is shown to yield a good fit to empirical data. In future studies, we plan to confirm the effectiveness of this model in predicting human intent and conveying robot intent for achieving fluency in human-robot handovers.
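As a rough illustration of what an elliptical reaching model can look like (our own sketch under simple 2-D assumptions; the paper's actual model and parameters are not reproduced here), a half-ellipse arc from a start point to a goal can be generated as follows:

```python
import numpy as np

def elliptical_reach(start, goal, arc_height=0.1, n=50):
    """Generate n points on a half-ellipse arc from start to goal.

    The major axis spans start->goal; arc_height (the semi-minor axis)
    lifts the path away from the straight line. Illustrative only.
    """
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    center = (start + goal) / 2
    u = (goal - start) / np.linalg.norm(goal - start)  # major-axis dir
    w = np.array([-u[1], u[0]])                        # perpendicular
    a = np.linalg.norm(goal - start) / 2               # semi-major axis
    theta = np.linspace(np.pi, 0.0, n)                 # start -> goal
    return (center + a * np.cos(theta)[:, None] * u
                   + arc_height * np.sin(theta)[:, None] * w)

path = elliptical_reach([0.0, 0.0], [0.4, 0.0])
print(path[0], path[-1])    # endpoints coincide with start and goal
```

Fitting such a family of arcs to observed partial hand trajectories is one way the same model could serve both roles the abstract describes: inferring where a human reach is headed, and generating legible robot reaches.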
The use of touch in human to human relationships is an important one, as both an emotive and communicative gesture. For a simple rolling robot, touch is accomplished via collision. This work proposes a combination of soft robotics and social robotics to explore the social consequences of intentional collision. It introduces a blowfish-inspired soft robot, which can use retractable silicone spikes for both actuation and social expression.
Human-robot trust is key for effective human-robot interactions. Prior research has explored human-automation and human-robot trust, but further research is still needed. Previously, I investigated trust between older adults and a care provider, both human and robot, and found that the three main factors influencing trust in the home-care context were professional skills, personal traits, and communication. One question this study led to is how people group and categorize technologies such as a robot care provider. My current research seeks to explore the dimensions along which people group technologies, using multidimensional scaling.
The size of the population with cognitive impairment is increasing worldwide, and socially assistive robotics offers a solution to the growing demand for professional carers. Adaptation to users generates more natural, human-like behavior that may be crucial for wider robot acceptance. The focus of this work is on robot-assisted cognitive training of patients who suffer from mild cognitive impairment (MCI) or Alzheimer's disease. We propose a framework that adjusts the level of robot assistance and the way robot actions are executed according to user input. The actions can be performed using any of the following modalities (speech, gesture, and display) or their combination, with the choice of modalities depending on the availability of the required resources. The memory state of the user is modeled as a Hidden Markov Model and is used to determine the level of robot assistance. A pilot user study was performed to evaluate the effects of the proposed framework on the quality of interaction with the robot.
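A minimal sketch of such an HMM-based memory-state filter (with made-up probabilities and an illustrative mapping from belief to assistance level, not the paper's values) might look like:

```python
import numpy as np

# Hidden memory states of the user during a cognitive-training exercise.
states = ["recalls", "forgot"]
T = np.array([[0.9, 0.1],    # transition: recalls -> recalls/forgot
              [0.3, 0.7]])   # forgot  -> recalls/forgot
E = np.array([[0.8, 0.2],    # emission: P(correct/incorrect | recalls)
              [0.2, 0.8]])   # P(correct/incorrect | forgot)

def forward_step(belief, obs):          # obs: 0 = correct, 1 = incorrect
    b = E[:, obs] * (T.T @ belief)      # standard HMM filtering step
    return b / b.sum()

def assistance_level(belief):
    """Map the belief that the user forgot to a robot assistance level."""
    p_forgot = belief[1]
    return ("full_prompt" if p_forgot > 0.7
            else "hint" if p_forgot > 0.4 else "none")

b = np.array([0.5, 0.5])
for obs in [1, 1, 0]:                   # two errors, then a correct answer
    b = forward_step(b, obs)
    print(dict(zip(states, b.round(2))), "->", assistance_level(b))
```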
We describe a novel wearable robotic arm intended for close-range collaborative activities. Results from a user-centered iterative design procedure were applied to the development of a prototype, which was then evaluated in terms of workspace volume and loads on the user. Ongoing and future work involves evaluating the human-robot interaction with an autonomous version of the robot in specific scenarios of collaborative assembly and sorting.
Varying factors prevent loved ones from being together physically. There are existing means of facilitating conversation, be it through text, voice, video chat or telepresence products. However, current solutions lack the portability and personification necessary to allow for shared activities and a sense of presence. We present the design for ParticiPod, a portable device that enables telepresence in a more tangible way. We have identified a potential gap in research and present a physical prototype that demonstrates key features. We also detail the ways in which ParticiPod contributes to social good.
In this paper, a system is proposed wherein a network of robots, each termed a 'BeachBot', would be used to collect waste and litter found on beaches, in coastal locations, and out in the oceans. The system adopts remote human-robot interaction, where each BeachBot would be teleoperated by a human operator through a virtual reality interface. Operators would be crowdsourced online, with registered individuals able to log in and be assigned a robot. The proposed system is first tested through an online virtual reality game that simulates its implementation.
One of the functions of creativity is to improve society. Creativity is a skill that can be trained and has proven benefits for the professional and personal development of individuals. Yet, a paradox exists: despite seeking individuals with a greater creative potential, society lacks systems that nurture the development of this skill. Technological advances arrive with the potential to develop solutions that support the development of creative skills. In this proposal, we introduce YOLO, a social robot that acts as a tool for developing creativity in children. YOLO resembles a robotic toy with a life of its own, developed specially for children, and envisioned to be used during playtime. YOLO can boost new ideas for children's invented stories, by making use of minimalist behaviors that are meant for creativity expansion and social connectedness. We present the design and fabrication processes of YOLO, including examples of potential scenarios of use. With YOLO we aim to demonstrate a potential scenario in which autonomous robots can be used to promote social good.
This paper explores the design of a robot and interaction model that enables a robot to engage in human-like financial transactions and to enter into agreements with a human counterpart. More explicitly, (1) we bestow the agent with a cryptocurrency wallet and (2) define bilateral and multilateral agreements that can be automated as smart contracts in a distributed ledger. As a use case of a robot with such features, we describe roBU, a traveling robot that can enter into financial agreements in exchange for assistance in traveling the world. With this effort, we expect to validate the idea of near-future scenarios where autonomous or semi-autonomous agents are endowed with a type of social autonomy and the ability to engage in financial transactions. We believe the latter can improve task completion and enable further exploration of robot-human relationships, dependencies, and trust.
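To make the agreement mechanism concrete, below is a minimal Python sketch of how a bilateral assistance-for-payment agreement might be represented before being compiled into a smart contract on a distributed ledger. The wallet identifiers, state machine, and hash-stamped event log are hypothetical illustrations, not the authors' implementation.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
import hashlib, json, time

class Status(Enum):
    PROPOSED = auto()
    ACCEPTED = auto()
    FULFILLED = auto()
    SETTLED = auto()

@dataclass
class BilateralAgreement:
    """Assistance-for-payment deal between a robot and a human helper.

    A real deployment would encode these transitions in a smart contract;
    here the 'ledger' is just an append-only, hash-stamped event list."""
    robot_wallet: str
    helper_wallet: str
    service: str    # the assistance requested
    amount: float   # payment in some cryptocurrency unit
    status: Status = Status.PROPOSED
    history: list = field(default_factory=list)

    def _log(self, event):
        entry = {"event": event, "time": time.time(), "status": self.status.name}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.history.append(entry)

    def accept(self):
        assert self.status is Status.PROPOSED
        self.status = Status.ACCEPTED
        self._log("helper accepted the proposal")

    def confirm_service(self):
        assert self.status is Status.ACCEPTED
        self.status = Status.FULFILLED
        self._log("robot confirmed the assistance was delivered")

    def settle(self):
        assert self.status is Status.FULFILLED
        self.status = Status.SETTLED
        self._log(f"transfer {self.amount} from {self.robot_wallet} "
                  f"to {self.helper_wallet}")

# Hypothetical usage: the robot proposes, the helper accepts, payment settles.
deal = BilateralAgreement("robot-wallet-01", "helper-wallet-02",
                          "carry the robot up the stairs", 0.005)
deal.accept(); deal.confirm_service(); deal.settle()
print([e["event"] for e in deal.history])
```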
Nemsu is a device that seeks to prevent and slow the deterioration of the hippocampus in a playful and non-invasive manner. Its functions address four causes related to the speed of such deterioration: (1) low mental activity, which is stimulated by associating daily tasks with melodies and basic memory games; (2) sleep apnea, which is improved through reminders and melodies that help the user fall asleep; (3) Generalized Anxiety Disorder (GAD), which improves with the stimulation of the senses through contact with different sounds and textures; and (4) the toxicity of the environment, for which Nemsu notifies the user when toxicity levels are high.
When young children create, they are exploring their emerging skills by engaging with the world. When young children reflect, they are deepening their learning experiences by extracting insights. And when young children share, they are strengthening their community by creating opportunities for collaboration. To support such constructionist learning, we present MindScribe, an affordable interactive robotic object that engages preliterate children in reflective inquiry. As children create artifacts, MindScribe invites them to "tell a story" about their creation, in any language. And through scaffolded questioning, MindScribe elicits children's insights into their creative discoveries. Together, they spark child-led communication and innovation in early learning communities.
Children from low-income backgrounds often struggle more academically than children from higher-income families; musical training, such as learning piano, can be implemented to bridge this gap. Although non-profit organizations can offer free piano classes for children from low-income families, musical instruments are often unaffordable for them to continue practicing. We designed an inexpensive robot, Itchy, Scratchy, for children to practice piano in an out-of-class context. Our prototype evaluation indicated that Itchy, Scratchy could act as an economical, effective, and engaging musical instrument, which is important for closing the gap in musical education achievement between different socioeconomic backgrounds.
The social, educational, economic, and health benefits of community gardens often stay limited to the gardeners. 'We' is a community-oriented robotic system designed to extend such benefits to the public and community. It consists of 1. 'We-Sense,' garden sensors that encourage community participation, and 2. 'We-Grow,' a responsive installation that reflects the state of the garden and its social life and allows the public to explore them. 'We' aims to raise awareness of community gardens in public spaces, increase public literacy about food and community gardening, provide out-of-class education, encourage participation in citizen science, and increase community engagement.
Building robots that can interact with humans on a daily basis is an active research field, and researchers from around the world are brainstorming on it. In this paper, I introduce a robot that can interact with humans in life-threatening situations: situations where any technology or any human power becomes impractical, even though we live in an era where we are exploring beyond our solar system. This robot doesn't know how to smile but can give someone a smile for their entire life. Finally, illustrations and renders of this robotic machine with a soul are presented.
We designed "Shelly," a robot that interacts with children while restraining children's robot-abusing behaviors. Shelly has a tortoise-like, friendly appearance and a touch-based, simple, and versatile interface, which encourages children to interact with the robot spontaneously in environments such as amusement parks or kindergartens. We have developed two prototypes and validated Shelly's social concepts, one-to-many interaction and the restraint of robot abuse, through field tests. Ultimately, Shelly's novel interface and interaction model, targeted at multiple-children interaction, could effectively contribute to various social goods.
We present a simple robotic tutor designed to help raise handwriting competency in school-aged children. "LetterMoose" shows the steps of how a letter is formed by writing on a regular piece of paper. The child is invited to imitate LetterMoose and to scan their own letters using LetterMoose in order to get evaluative feedback (both qualitative and quantitative). We propose that LetterMoose might be particularly useful for helping children with autism attain handwriting competency, as children in this group are more likely to suffer from writing difficulties and may uniquely benefit from interacting with robot technology.
Project VOCOWA aims to deliver an automated solution for assistive technologies and advanced smart assistance that will enable elderly and disabled people to live an independent and dignified life. VOCOWA can be implemented using three major technologies, i.e., Simultaneous Localization And Mapping (SLAM), deep learning, and natural language processing, to achieve its objectives. To date, the SLAM component has been implemented successfully.
Children born with cleft lip and palate (CL/P) disorder go through several years of corrective surgeries, dental procedures, and jaw correction. These children suffer from varying degrees of speech impediment. Current treatments include speech therapy, but such treatments are not equally accessible to children from different socio-economic backgrounds. Even when therapy is available, speech therapists find it difficult to evaluate the children's progress in articulation over time. Buddy is a therapy robot for CL/P children that can provide contextual speech therapy in a gamified story-building format. To improve articulation, Buddy will provide robust visual feedback during these sessions and guide the children in enunciating tough words while collecting speech-pattern data for further analysis by the speech therapist.
Obesity causes physical and mental health problems (e.g., high blood pressure, diabetes, and depression). Currently, 1 in 3 American children are overweight or obese, and the prevalence of this issue is increasing. Disproportionately affected are children from low-income families. As unhealthy eating habits are the primary driver of this epidemic, we designed Health-e-Eater - a system for encouraging children aged 2-5 from low-income families to eat healthy food, leading to better nutritional intake and the development of healthy eating habits. Health-e-Eater is a low-cost system consisting of a magic plate and a robotic companion, which motivates and educates during dinnertime.
Galef is a loyal guide to a healthier life. It helps users become healthier by reminding them to drink their recommended daily ration of water, monitoring the sugar levels in the drinks they put in it, and measuring their body composition regularly. Galef also has three accessories: the first is a water filter, and the other two are aimed at people who prefer naturally flavored water over plain water; one is a container for infusions and the other is a fruit crusher.
The refugee crisis is one of society's leading challenges. After a journey for survival, refugees and host institutions face barriers that hinder the integration process. To design solutions, we interviewed two groups: host institutions and past refugees. We identified critical issues, from legal concerns, such as unfamiliarity with their refugee status, to grocery shopping. Our envisioned solution is GeeBot, a low-cost egg-shaped robot that institutions would lend to arriving families for eighteen months. GeeBot will be a translator with teaching functions, an information provider, and an active promoter of interaction between native and refugee populations.
Humans depend on specific information to fulfill the needs that sustain their social existence. With advances in technology, humans try to satisfy their social needs through smart devices. However, these devices cannot fully satisfy human needs, and humans strive for companionship to combat loneliness. We propose to design a robot (MAI) that can monitor, assist, and interact with humans to improve their livelihood.
Current assistive technologies require complicated, cumbersome, and expensive equipment that is not user-friendly, not portable, and often demands extensive fine motor control. Our approach aims to solve these problems by developing a compact, non-obtrusive, and ergonomic wearable device that measures signals associated with human physiological gestures and thereafter generates useful commands to interact with the environment. Our innovation uses machine learning and non-invasive biosensors on top of the ears to identify eye movements and facial expressions with over 95% accuracy. Users can control different applications, such as a robot, powered wheelchair, cell phone, smart home, or other Internet of Things (IoT) devices. Combined with a VR headset and hand-gesture recognition devices, users can employ our technology to control a camera-mounted robot (e.g., a telepresence robot, drone, or any robotic manipulator) and navigate the environment in first-person view simply by eye movements and facial expressions. It enables an intuitive, totally 'touch-free' way of interaction. The experimental results show satisfactory performance in different applications; the device can be a powerful tool to help disabled people interact with the environment and can measure other physiological signals as a universal controller and health-monitoring device.
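As a rough illustration of the kind of pipeline such a device might use, the sketch below featurizes a short window of multi-channel biosignal data, classifies it with an off-the-shelf model, and maps the predicted gesture to a control command. The gesture set, features, training data, and command mapping are assumptions for illustration only; the abstract does not specify the actual models or signals.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

GESTURES = ["look_left", "look_right", "blink", "smile", "neutral"]

def featurize(window):
    """Simple time-domain features from one multi-channel signal window."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           np.abs(np.diff(window, axis=0)).mean(axis=0)])

# Stand-in for a labeled training set of 2-channel ear-sensor windows.
rng = np.random.default_rng(0)
X = np.stack([featurize(rng.normal(size=(50, 2))) for _ in range(200)])
y = rng.integers(0, len(GESTURES), size=200)
clf = RandomForestClassifier(n_estimators=100).fit(X, y)

# Hypothetical mapping from recognized gestures to device commands.
COMMANDS = {"look_left": "turn_left", "look_right": "turn_right",
            "blink": "stop", "smile": "go_forward", "neutral": "idle"}

def window_to_command(window):
    label = GESTURES[clf.predict(featurize(window)[None, :])[0]]
    return COMMANDS[label]

print(window_to_command(rng.normal(size=(50, 2))))
```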
Patient Reported Outcome Measures (PROMs) are a means of collecting information on the effectiveness of care delivered to patients as perceived by the patients themselves. A patient's pain level is a typical parameter only the patient him/herself can describe, and it is an important measure of a person's quality of life. When a patient stays in a Dutch hospital, nursing staff need to ask the patient for their pain level at least three times a day. Due to work pressure, this requirement is regularly not met. A social robot, available as a bedside companion during a patient's hospital stay, might be able to ask for the patient's pain level regularly. The video shows that this innovation in PROM data acquisition is feasible with older persons.
We designed a Deep Q-Network (DQN) that learns to perform high-level reasoning in a Learning from Demonstration (LfD) domain involving the analysis of human responses. We test our system by having a NAO humanoid robot automatically deliver a behavioral intervention designed to teach social skills to individuals with Autism Spectrum Disorder (ASD). Our model extracts relevant features from the multi-modal input of tele-operated demonstrations in order to deliver the intervention correctly to novel participants.
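For readers unfamiliar with the technique, the following is a minimal PyTorch sketch of a DQN of the kind the abstract describes: a Q-network trained on replayed transitions with an epsilon-greedy policy over discrete high-level actions. The feature and action dimensions, network size, and hyperparameters are placeholders, not those of the described system.

```python
import random
from collections import deque
import torch
import torch.nn as nn

N_FEATURES, N_ACTIONS = 16, 4  # hypothetical sizes for features and actions

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_FEATURES, 64), nn.ReLU(),
                                 nn.Linear(64, N_ACTIONS))
    def forward(self, x):
        return self.net(x)

q, q_target = QNet(), QNet()
q_target.load_state_dict(q.state_dict())
opt = torch.optim.Adam(q.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)  # stores (state, action, reward, next_state, done)
gamma, eps = 0.99, 0.1

def act(state):
    """Epsilon-greedy choice among high-level intervention actions."""
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q(torch.as_tensor(state, dtype=torch.float32)).argmax())

def train_step(batch_size=32):
    """One TD-learning update from a random minibatch of replayed transitions."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s = torch.tensor([b[0] for b in batch], dtype=torch.float32)
    a = torch.tensor([b[1] for b in batch])
    r = torch.tensor([b[2] for b in batch], dtype=torch.float32)
    s2 = torch.tensor([b[3] for b in batch], dtype=torch.float32)
    done = torch.tensor([b[4] for b in batch], dtype=torch.float32)
    target = r + gamma * (1 - done) * q_target(s2).max(dim=1).values
    pred = q(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(pred, target.detach())
    opt.zero_grad(); loss.backward(); opt.step()
```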
Humanoid robots hold potential to enhance the customer experience in bricks-and-mortar stores. In an experimental field study at 'The Belgian Chocolate House' in Brussels Airport, participants took part in a quiz on the topic of chocolate, in which a tablet kiosk and a Pepper robot were compared. The experiments showed that offering the quiz on a humanoid robot produced better results in terms of shopper impressions and behavior.
We demonstrate an intuitive gesture-based interface for manually guiding a drone to land on a precise spot. Using unobtrusive wearable sensors, an operator can quickly and accurately maneuver and land the drone after very little training; a preliminary user study on 5 subjects shows that the system compares favorably with a traditional joystick interface.
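One plausible shape for such an interface is sketched below: wearable IMU orientation readings are mapped to drone velocity setpoints, with an open-hand gesture triggering a slow landing descent. The deadband, gains, and gesture trigger are illustrative assumptions, not the actual interface from the demonstration.

```python
import numpy as np

def gesture_to_setpoint(roll, pitch, hand_open, max_speed=0.5):
    """Map wrist orientation (radians) from a wearable IMU to a velocity
    setpoint (m/s); an open hand commands a slow landing descent.
    Gains and deadband are illustrative, not tuned values."""
    deadband = np.radians(5)
    def scaled(angle):
        if abs(angle) < deadband:
            return 0.0  # ignore small unintentional wrist motion
        return max_speed * float(np.clip(angle / np.radians(30), -1.0, 1.0))
    if hand_open:
        return {"vx": 0.0, "vy": 0.0, "vz": -0.2, "land": True}
    return {"vx": scaled(pitch), "vy": scaled(roll), "vz": 0.0, "land": False}

# Example: wrist rolled left 20 degrees, pitched forward 12 degrees.
print(gesture_to_setpoint(np.radians(-20), np.radians(12), hand_open=False))
```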
In this video we introduce a unifying concept for robotics education that is already partially implemented as a pilot project in the German education system. We propose a network that connects various types of schools, universities and companies and introduces them to and trains them in state-of-the-art robotic platforms. Students, apprentices, technicians and engineers can profit from lectures covering the broad theory of robotics while being supported by a strong practical component. In the video we outline our goals as well as first results that underline the validity of the proposed approach.
How closely do humans comfortably approach hovering drones? In human-robot interaction, social proxemics is somewhat understood. Han & Bae showed that students usually stand as far apart as the height of a tele-robot teacher [1]. As commercial drone markets rise, social proximity in human-drone interaction becomes an important issue, yet research on measuring it is still at an early stage. Jane showed that Chinese participants approach a flying drone closer than American participants do [2]. Abtahi experimented with an unsafe and a safe-to-touch drone to check whether participants instinctively use touch when interacting with the safe-to-touch drone [3]. We conducted the first study of how people respond to a request to approach hovering drones that differ in size and flying altitude, under conditions where safety was sufficiently ensured. Two drones, one small and one big, were prepared; each flew at 1.6 m (eye level) or 2.6 m (overhead). A total of 32 participants (average age 22.64) were individually asked to stand 11.5 feet away from the hovering drone in a 2x2 design: two sizes and two flying altitudes. Only one participant had experience operating drones. A transparent safety panel was installed between the hovering drone and the participant, and each participant was allowed to move forward from the standing point, which was 6.5 feet from the safety panel. A remote operator who controlled the hovering drone held a short conversation with the participant behind the safety panel via a loudspeaker system connected to a cellular phone at the experiment site. After the participant recognized the drone as the extension of the remote operator, the participant was asked to move forward to hear the operator better. The results showed that participants approached closer when interacting with eye-level drones than with overhead drones; flight altitude matters in the social proximity of human-drone interaction at a significance level of α=0.2. Females moved closer to the big, eye-level drone. Thirty-one participants entered social space to interact with the drones; only one approached by less than two feet and thus remained in public space. Gender and drone size did not make significant differences in social proximity. This experiment has an evident limitation in that it measured proxemics with participants behind an acrylic panel, which had to be installed for the safety of any human-drone proximity experiment. Nonetheless, the results imply that most South Korean participants may be ready to comfortably enter social space to interact with drones, and hovering drones at eye-level altitude seem to promote this attitude.
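For a comparable 2x2 design (drone size x flight altitude), a standard analysis is a two-way ANOVA over approach distance; the sketch below shows one way to run it with statsmodels, using fabricated stand-in values rather than the study's measurements.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Illustrative stand-in approach distances (metres); not the study's data.
df = pd.DataFrame({
    "size":     ["small"] * 4 + ["big"] * 4,
    "altitude": ["eye", "eye", "overhead", "overhead"] * 2,
    "distance": [1.1, 0.9, 1.6, 1.8, 1.0, 1.2, 1.7, 1.5],
})
model = ols("distance ~ C(size) * C(altitude)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction
```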
Soft robotics technology has been proposed for a number of applications that involve human-robot interaction. This video documents a platform created to explore human perceptions of soft robots in interaction. The video presents select footage from an interaction experiment conducted with the platform and the initial findings obtained (also accepted to HRI'18 as a Late-Breaking Report).
Hearing-impaired communities around the world communicate via sign language. The focus of this work is to develop an interpreting human-robot interaction system that could act as a sign language interpreter in public places. To this end, we utilize a number of technologies, including depth cameras (a Leap Motion sensor and a Microsoft Kinect), the humanoid robots NAO and Pepper, and deep learning approaches for classification.
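A minimal sketch of the classification component might look like the following: a small recurrent network mapping sequences of hand-keypoint features, such as those a Leap Motion sensor provides, to a sign label. The sequence length, feature dimension, vocabulary size, and architecture are assumptions for illustration; the abstract does not detail the actual networks used.

```python
import torch
import torch.nn as nn

# Hypothetical setup: 30-frame sequences of 3-D fingertip positions
# (5 fingertips x 3 coordinates = 15 features) classified into 10 signs.
SEQ_LEN, N_FEATURES, N_SIGNS = 30, 15, 10

class SignClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(N_FEATURES, 64, batch_first=True)
        self.head = nn.Linear(64, N_SIGNS)
    def forward(self, x):        # x: (batch, seq, features)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])  # logits over the sign vocabulary

model = SignClassifier()
logits = model(torch.randn(8, SEQ_LEN, N_FEATURES))  # a dummy batch
print(logits.argmax(dim=1))  # predicted sign index per sequence
```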
We developed an interactive humanoid robotic platform with a real-time face learning algorithm for user identification and an emotional episodic memory that associates emotional experiences with users, so that the robot can differentiate its reactions according to the emotional history it shares with each user. In this video, we demonstrate how a robot can develop a social relationship with humans through face identification and emotional interaction.
The proposed demonstration suggests a mapping function from five human emotional states (i.e., anger, happiness, sadness, surprise, and fear) to human-interpretable drone motions, using the personal drone's movements (e.g., changes in speed or rotation) instead of anthropomorphizing it.
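One way such a mapping function could be structured is sketched below; the per-emotion motion parameters are illustrative guesses at the kind of mapping the demonstration proposes, not its actual values.

```python
import math

# Illustrative emotion-to-motion parameters: forward speed (m/s),
# yaw-rate amplitude (rad/s), and vertical oscillation amplitude (m/s).
EMOTION_MOTIONS = {
    "anger":     {"speed": 1.0, "yaw": 2.0, "bob": 0.0},  # fast, sharp turns
    "happiness": {"speed": 0.6, "yaw": 1.0, "bob": 0.3},  # bouncy arcs
    "sadness":   {"speed": 0.2, "yaw": 0.2, "bob": 0.0},  # slow, drooping
    "surprise":  {"speed": 0.8, "yaw": 0.0, "bob": 0.6},  # sudden vertical hops
    "fear":      {"speed": 0.4, "yaw": 1.5, "bob": 0.2},  # jittery motion
}

def motion_setpoint(emotion, t):
    """Turn an emotion label into a time-varying velocity setpoint."""
    p = EMOTION_MOTIONS[emotion]
    return {"vx": p["speed"],
            "yaw_rate": p["yaw"] * math.sin(t),
            "vz": p["bob"] * math.sin(4 * t)}

print(motion_setpoint("happiness", t=0.5))
```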
(1) Robots that are engineered and programmed to help in residential and/or hospital settings can encourage or discourage the acceptance of help through the look of their character design. (2) We interact with robotic technology in varying ways based on our perception. (3) Human-robot interaction is essentially human-human interaction with the robot as a facilitator.
Mobile robots are starting to appear in everyday life, e.g., in shopping malls or nursing homes. Often, they are operated by staff with no prior experience in robotics. False expectations regarding the capabilities of robots, however, may lead to disappointments and reservations when it comes to accepting robots in one's personal space. We make use of state-of-the-art Mixed Reality (MR) technology by integrating a Microsoft HoloLens into the robot's operating space to facilitate acceptance and interaction. The MR device is used to increase situation awareness by (a) externalizing the robot's behavior-state and sensor data and (b) projecting planned behavior. In addition, MR technology is (c) used as a satellite sensing and display device for human-robot communication.
We describe the design and implementation of Blossom, a social robot that diverges from convention by being flexible both internally and externally. Internally, Blossom's actuation mechanism is a compliant tensile structure supporting a free-floating platform. This design enables organic and lifelike movements with a small number of motors and simple software control. Blossom's exterior is also flexible in that it is made of fabric and not rigidly attached to the mechanism, enabling it to freely shift and deform. The handcrafted nature of the shell allows Blossom to take on a variety of forms and appearances. Blossom is envisioned as an open platform for user-crafted human-robot interaction.
An estimated 20 percent of the world's population experience difficulties with physical, cognitive, and mental health. Senior citizens experience many inconveniences due to declining memory and mobility as a function of the aging process. To address this, the cARe-bot system was designed to dynamically provide optimal information in the actual environment at any time or place. It is also designed to increase an elderly person's ability to live in a house or in a senior home with daily support, thus preserving the convenience and comfort they knew throughout their lives. The hardware of this system is a 360-degree controllable projection system, made with a pan-tilt system, a projector, and a depth-recognition camera. An Arduino is attached to control the two sets of servo motors and is connected to a PC, which controls the entire process.
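A sketch of how the PC side might command the Arduino-driven pan-tilt unit over a serial link is shown below, using pyserial. The line-based protocol, port name, and angle limits are assumptions for illustration; the actual firmware interface is not described in the abstract.

```python
import serial  # pyserial

# Hypothetical protocol: the PC sends "<pan>,<tilt>\n" in degrees and the
# Arduino sketch parses it to drive the two servo sets.
PORT, BAUD = "/dev/ttyACM0", 115200

def aim_projector(ser, pan_deg, tilt_deg):
    """Command the pan-tilt unit so the projection lands near the user."""
    pan = max(0, min(359, int(pan_deg)))     # 360-degree pan range
    tilt = max(-45, min(90, int(tilt_deg)))  # illustrative tilt limits
    ser.write(f"{pan},{tilt}\n".encode())

with serial.Serial(PORT, BAUD, timeout=1) as ser:
    # e.g., point toward a user location reported by the depth camera
    aim_projector(ser, pan_deg=120, tilt_deg=15)
```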
Service robots with social intelligence are starting to be integrated into our everyday lives. These robots are intended to help improve aspects of quality of life as well as improve efficiency. We are organizing an exciting workshop at HRI 2018 that is oriented towards sharing ideas amongst participants with diverse backgrounds, ranging from human-robot interaction design, social intelligence, decision making, and social psychology to robotic social skills. The purpose of this workshop is to explore how social robots can interact with humans socially and facilitate the integration of social robots. This workshop focuses on three social aspects of human-robot interaction: (1) technical implementation of social robots and products, (2) form, function, and behavior, and (3) human behavior and expectations as a means to understand the social aspects of interacting with these robots and products. This workshop is supported by the IEEE RAS Technical Committee on Robotic Hands, Grasping and Manipulation.
The increasing complexity of robotic systems is pressing the need for them to be transparent and trustworthy. When people interact with a robotic system, they inevitably construct mental models to understand and predict its actions. However, people's mental models of robotic systems stem from their interactions with living beings, which induces the risk of establishing incorrect or inadequate mental models and may lead people to either under- or over-trust these systems. We need to understand the inferences that people make about robots from their behavior, and leverage this understanding to formulate and implement behaviors into robotic systems that support the formation of correct mental models and foster trust calibration. This way, people will be better able to predict the intentions of these systems and thus more accurately estimate their capabilities, better understand their actions, and potentially correct their errors. The aim of this full-day workshop is to provide a forum for researchers and practitioners to share and learn about recent research on people's inferences of robot actions, as well as the implementation of transparent, predictable, and explainable behaviors into robotic systems.
As robots that share working and living environments with humans proliferate, human-robot teamwork (HRT) is becoming more relevant every day. By necessity, these HRT dynamics develop over time, as HRT can hardly happen only in the moment. What theories, algorithms, tools, computational models, and design methodologies enable effective and safe longitudinal human-robot teaming? To address this question, we propose a half-day workshop on longitudinal human-robot teaming. This workshop seeks to bring together researchers from a wide array of disciplines with the focus of enabling humans and robots to better work together in real-life settings and over the long term. Sessions will consist of a mix of plenary talks by invited speakers and contributed papers/posters, and will encourage discussion and exchange of ideas amongst participants through breakout groups and a panel discussion.
Robot-Assisted Therapy (RAT) has successfully been used in Human-Robot Interaction (HRI) research by including social robots in health-care interventions, by virtue of their ability to engage human users in both social and emotional dimensions. Research projects on this topic exist all over the globe in the USA, Europe, and Asia. All of these projects share the ambitious goal of increasing the well-being of a vulnerable population. Typically, RAT is performed with the Wizard-of-Oz (WoZ) technique, where the robot is controlled, unbeknownst to the patient, by a human operator. However, WoZ has been demonstrated not to be a sustainable technique in the long term. Providing robots with autonomy (while remaining under the supervision of the therapist) has the potential to lighten the therapist's burden, not only in the therapeutic session itself but also in longer-term diagnostic tasks. Therefore, there is a need to explore several degrees of autonomy in social robots used in therapy. Increasing the autonomy of robots might also bring about a new set of challenges. In particular, there will be a need to answer new ethical questions regarding the use of robots with a vulnerable population, as well as a need to ensure ethically-compliant robot behaviors. Therefore, in this workshop we want to gather findings and explore which degree of autonomy might help to improve health-care interventions and how we can overcome the ethical challenges inherent to it.
As robotic technologies rapidly enter our everyday lives, we are compelled to consider the ethical, legal, and societal (ELS) challenges that arise in connection to these changes. In this workshop, we will present a novel methodological approach to HRI that will: help to identify ELS issues through ethnographic research methods, encourage interdisciplinary collaboration, and broaden the scope of existing HRI research while providing concrete tools for addressing these ELS challenges. We aim to introduce ethnographic methods and unfold the benefits and challenges of conducting ethnographic research. We will engage participants through speaker presentations, lightning talks, moderated group discussions, and a group-work session focused on integrating new methods into attendees' own research practices. Workshop topics will draw on the content of selected position papers, centered around how we can use ethnographic methods in HRI research so that we can: better understand users, workplaces, and robots; identify and address ELS issues; and ultimately ensure the design of more ethical, sustainable, and responsible robotics.
Today, off-the-shelf social robots are used increasingly in the HRI community to research social interactions with different target user groups across a range of domains (e.g. healthcare, education, retail and other public spaces).
We invite everyone doing HRI studies with end users, in the lab or in the wild, to collect past experiences of methods and practices that had issues or did not turn out as expected. This could include but is not limited to experimental setup, unplanned interactions, or simply the difficulty in transferring theory to the real world.
In order to be able to generalize and compare differences across multiple HRI domains and create common solutions, we are focusing in this workshop on experiences with often used off-the-shelf social robots.
We are interested in identifying the underlying causes of the unexpected HRI results, e.g. the contextual, task, and user related factors that influence interaction with a robot platform. We will furthermore discuss and document (ad hoc) solutions and/or lessons learned such that they can be shared with the HRI community.
As well as sharing specific case studies documenting real world HRI experiences, we further hope to inspire the continued sharing of open and insightful reflections within the HRI community.
The Robots for Learning workshop series aims at advancing research topics related to the use of social robots in educational contexts. The full-day workshop follows on from previous events at Human-Robot Interaction conferences focusing on efforts to design, develop, and test new robotic systems that help learners. This 4th edition of the workshop will deal in particular with the potential use of robots for inclusive learning. In the past few years, inclusive education has been a key policy in a number of countries, aiming to provide equal chances and common ground to all. In this workshop, we aim to discuss strategies to design robotic systems able to adapt to learners' abilities, to provide assistance, and to demonstrate long-term learning effects.
Commercially available social robots are finally here. Previously accessible only to companies or wealthy individuals, affordable, mass-produced autonomous robot companions are poised to take the global market by storm in 2018. It is an exciting time for social roboticists, as some of the theories and techniques developed and tested for years under controlled conditions are finally released to the general public. However, the social robots available to the public differ significantly from those currently used in labs and field studies due to commercial requirements such as affordability, reliability, and ability to function despite environmental variability. This workshop focuses on the state of social robots in the market today---the lessons learned from mass-producing and distributing actual products, and the cutting-edge research that could be brought to bear on the many issues faced. Through presentations, panels, and hands-on interactions, participants from both academia and industry give each other feedback on what is working and what is not, and set goals for the near future.
Exercising is strongly recommended for the prevention and treatment of pathologies with high prevalence, such as cardiovascular diseases, cancer, and diabetes. The World Health Organization (WHO) states that insufficient physical activity is one of the leading risk factors for death worldwide. The decrease of physical activity in our society is not just an individual problem; it is influenced by a variety of environmental and social factors. Hence, it is important to target this issue from a multi-perspective and interdisciplinary point of view. This full-day workshop will offer a forum for researchers from a variety of backgrounds to discuss the potentials and limitations of using social robots to promote physical activity. Looking across disciplinary boundaries, we hope to establish a common understanding of the needs of potential target groups. We invite participants to share their experiences on the requirements and challenges of implementing and deploying robot coaches that could motivate people to start and adhere to a physical activity program.
The 1st International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interactions (VAM-HRI) will bring together HRI, Robotics, Artificial Intelligence, and Mixed Reality researchers to identify challenges in mixed reality interactions between humans and robots. Topics relevant to the workshop include development of robots that can interact with humans in mixed reality, use of virtual reality for developing interactive robots, the design of new augmented reality interfaces that mediate communication between humans and robots, comparisons of the capabilities and perceptions of robots and virtual agents, and best practices for the design of such interactions. VAM-HRI is the first workshop of its kind at an academic AI or Robotics conference, and is intended to serve as a timely call to arms to the academic community in response to the growing promise of this emerging field.
This workshop focuses on research in HRI using objective measures from social and cognitive neuroscience to provide guidelines for the design of robots well-tailored to the workings of the human brain. The aim is to present results from experimental studies in which human behavior and brain activity are measured during interactive protocols with robots. Discussion will focus on means to improve replicability and generalizability of experimental results in HRI.