ICSR 2018 WORKSHOP ON SOCIAL HUMAN-ROBOT
INTERACTION OF SERVICE ROBOTS
November 28, 2018
The 2018 International Conference on Social Robotics (ICSR 2018) invites papers for a Workshop on Social Human-Robot Interaction of Service Robots.
Service robots with social intelligence are coming into the human world, and they will help make our lives better. We are organizing an exciting workshop at ICSR oriented towards sharing ideas among participants with diverse backgrounds spanning robotics, machine learning, computer vision, social psychology, and human-robot interaction design. The purpose of this workshop is to explore how social robots can interact with humans socially and to facilitate the integration of social robots into human environments.
This workshop will focus on the current advances in the area of social Human-Robot Interaction, social intelligence, social skills, and their applications including clinical evaluations. Papers are solicited on all areas directly related to these topics, including but not limited to:
- Social perception and context awareness
- Short/long-term behavior recognition
- Social expression and interactive behavior
- Social task modelling and management
- Social grasping and navigation skills
- Social robot design
- Human-robot interaction design
- Emotion recognition and model design
- Dialogue based interaction
- User evaluation
- Applications such as healthcare, receptionist, education
Prospective authors are invited to submit short papers (2 pages) in the ICSR2018 Workshop on Social Human-Robot Interaction of Human-care Service Robots format by the paper submission deadline, and the slide file (ppt or pdf) by the slide submission deadline. You can download the workshop template from here: [ICSR_Workshop_Form]
Please submit your paper and slides to Dr. Ho Seok Ahn (email@example.com) with the subject line in this format: [ICSR2018 Workshop] Author_Title
Important Dates:
- Paper submission: September 30, 2018 ==> October 30, 2018
- Notification of acceptance: October 30, 2018 ==> November 15, 2018
- Slide (ppt or pdf) submission: November 20, 2018 ==> November 22, 2018 (please re-submit your slides on the workshop day if you update them after submission)
- Workshop: November 28, 2018
Organizers:
- Ho Seok Ahn (The University of Auckland, New Zealand)
- Minsu Jang (ETRI, Korea)
- Jongsuk Choi (KIST, Korea)
- Sonya S. Kwak (KAIST, Korea)
- Chung-Hyuk Park (George Washington University, USA)
- Yoonseob Lim (KIST, Korea)
Invited Speakers:
- Takayuki Kanda (Kyoto University/ATR, Japan)
- Amit Kumar Pandey (SoftBank Robotics)
- Franziska Kirstein (Blue Ocean Robotics)
- David Sirkin (Stanford University, USA) – Teleconference
- Hae Won Park (MIT, USA) – Teleconference
Accepted Papers:
- Anastasia K. Ostrowski, Nikhita Singh, Hae Won Park, Cynthia Breazeal, “Preferences, Patterns, and Wishes for Agents in the Home” (MIT) [Teaser]
AI systems in the form of voice-activated agents, such as Amazon Echo and Google Home, are becoming increasingly prevalent in home environments. Despite this growth, there is little understanding of people’s preferences, desires, and boundaries for AI systems in the home. A design toolkit and a long-term home experience with the Amazon Echo Dot and Jibo, described in this paper, provided an opportunity to explore 69 intergenerational users’ preferences, usage patterns, and design wishes for future AI systems in the home.
- Kimmo Vänni, John-John Cabibihan, “Economic Evaluation model for the Purchase of a Social Robot” (Qatar University) [Teaser]
Frameworks and methods for evaluating the economic benefits of social robots are still scarce. Public service organizations such as hospitals may need evaluation models to convince policy makers that purchasing social robots can cut productivity loss and improve quality of work. The aim of this study was to present the monetary value of productivity loss for one hospital and to discuss how much a hospital could invest in robots. A hospital that employs about 500 nurses can invest about €500,000 annually if robots are able to compensate for half of the productivity loss due to employees’ poor performance.
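The abstract’s headline figure follows from simple back-of-the-envelope arithmetic, sketched below. Only the 500-nurse staff size and the €500,000 figure come from the abstract; the total annual productivity loss is back-solved from those numbers and is hypothetical, not reported data.

```python
# Hypothetical sketch of the investment logic in the abstract.
# Only the 500-nurse staff size and the EUR 500,000 headline come from
# the abstract; the total annual loss is back-solved, not reported data.

n_nurses = 500
annual_productivity_loss = 1_000_000   # EUR/year, implied total
robot_compensation_share = 0.5         # robots offset half of the loss

max_annual_investment = annual_productivity_loss * robot_compensation_share
print(f"EUR {max_annual_investment:,.0f} per year "
      f"(EUR {max_annual_investment / n_nurses:,.0f} per nurse)")
```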
- Deborah Johanson, Ho Seok Ahn, Bruce MacDonald, Elizabeth Broadbent, “Investigating the Use of Attentional Behaviours by a Social Robotic Receptionist: Effects on User Perceptions” (University of Auckland) [Teaser]
Developing social robot behaviours is critically important if robots are to be successfully employed in healthcare environments. The ability of a robot to display, engage, and maintain attention is fundamental to ensuring that interactions with humans are appropriate and comfortable. The aim of this research was to examine the effect of robot voice pitch, robot self-disclosure, and robot forward lean on human attention, engagement, perceived robot empathy, and perceived robot attention. A randomised, between-subjects experimental design was employed. Preliminary results will be presented at this workshop.
- Woo-Ri Ko, Youngwoo Yoon, Minsu Jang, Jaeyeon Lee, Jaehong Kim, “End-to-End Learning-based Interaction Behavior Generation for Social Robots” (ETRI) [Teaser]
This paper proposes an interaction behavior generation method for social robots using end-to-end learning. The next joint angles of a social robot are predicted by a long short-term memory (LSTM)-based interaction behavior generator from the previous joint angles and human skeletons observed from the robot’s point of view. The interaction behavior generator is trained on the NTU action recognition dataset. To show the feasibility of the proposed method, a humanoid robot, “NAO,” is trained to generate two interaction behaviors, i.e. handshaking and staying still, in the Choregraphe simulator.
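The data flow described in the abstract (previous joint angles plus an observed human skeleton in, next joint angles out, through an LSTM) can be sketched as below. This is a minimal untrained NumPy sketch, not the authors’ implementation; all dimensions, layer sizes, and names are illustrative assumptions.

```python
import numpy as np

# Hypothetical dimensions -- not from the paper.
N_JOINTS = 12        # robot joint angles
N_SKELETON = 25 * 3  # human skeleton keypoints (x, y, z), NTU-style
HIDDEN = 64

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell: all four gates computed from [input, hidden]."""
    def __init__(self, n_in, n_hidden):
        self.n_hidden = n_hidden
        # one stacked weight matrix for the four gates (i, f, o, g)
        self.W = rng.standard_normal((4 * n_hidden, n_in + n_hidden)) * 0.1
        self.b = np.zeros(4 * n_hidden)

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, o = (sigmoid(z[k * self.n_hidden:(k + 1) * self.n_hidden])
                   for k in range(3))
        g = np.tanh(z[3 * self.n_hidden:])
        c = f * c + i * g
        h = o * np.tanh(c)
        return h, c

def predict_next_joints(prev_joints, skeleton_seq):
    """Roll the LSTM over a sequence of (joint angles, observed skeleton)
    pairs and map the final hidden state to the next joint angles."""
    cell = LSTMCell(N_JOINTS + N_SKELETON, HIDDEN)
    W_out = rng.standard_normal((N_JOINTS, HIDDEN)) * 0.1  # readout layer
    h, c = np.zeros(HIDDEN), np.zeros(HIDDEN)
    for joints, skel in zip(prev_joints, skeleton_seq):
        h, c = cell.step(np.concatenate([joints, skel]), h, c)
    return W_out @ h  # predicted next joint angles

# Untrained forward pass on dummy data, just to show the shapes involved.
T = 10
prev = rng.standard_normal((T, N_JOINTS))
skel = rng.standard_normal((T, N_SKELETON))
next_joints = predict_next_joints(prev, skel)
print(next_joints.shape)
```

In the paper the generator is trained end-to-end; here the weights are random, so the sketch only demonstrates the input/output structure.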
- Youngwoo Yoon, Woo-Ri Ko, Minsu Jang, Jaeyeon Lee, Jaehong Kim, “End-to-End Learning of Co-Speech Gesture Generation for Humanoid Robots” (ETRI) [Teaser]
Co-speech gestures enhance interaction experiences between humans as well as between humans and robots. Existing robots use rule-based speech-gesture associations, but implementing these requires human labor and expert prior knowledge. We present a learning-based co-speech gesture generation method trained on TED talks. The proposed end-to-end neural network model consists of an encoder for speech text understanding and a decoder that generates a sequence of gestures. The model successfully produces various gestures, including iconic, metaphoric, deictic, and beat gestures. We also demonstrate co-speech gestures on a NAO robot working in real time.
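The encoder-decoder shape of such a model (speech text in, a sequence of pose frames out) can be sketched as follows. This is a minimal untrained NumPy sketch under assumed sizes, not the authors’ network; every dimension and name below is illustrative.

```python
import numpy as np

# Hypothetical sizes -- not from the paper.
VOCAB = 1000       # speech-text vocabulary
EMBED = 32         # word embedding size
HIDDEN = 64        # RNN hidden size
POSE_DIM = 10      # upper-body pose per output frame
N_FRAMES = 30      # gesture frames to generate

rng = np.random.default_rng(1)
E = rng.standard_normal((VOCAB, EMBED)) * 0.1                    # embeddings
W_enc = rng.standard_normal((HIDDEN, EMBED + HIDDEN)) * 0.1      # encoder RNN
W_dec = rng.standard_normal((HIDDEN, POSE_DIM + HIDDEN)) * 0.1   # decoder RNN
W_pose = rng.standard_normal((POSE_DIM, HIDDEN)) * 0.1           # hidden->pose

def encode(token_ids):
    """Plain tanh RNN over the word embeddings of the speech text."""
    h = np.zeros(HIDDEN)
    for t in token_ids:
        h = np.tanh(W_enc @ np.concatenate([E[t], h]))
    return h

def decode(h, n_frames=N_FRAMES):
    """Autoregressively emit pose frames, feeding each frame back in."""
    pose = np.zeros(POSE_DIM)
    frames = []
    for _ in range(n_frames):
        h = np.tanh(W_dec @ np.concatenate([pose, h]))
        pose = W_pose @ h
        frames.append(pose)
    return np.stack(frames)

gesture = decode(encode([3, 17, 256, 42]))  # dummy token ids
print(gesture.shape)
```

The feedback of each emitted pose into the decoder is the autoregressive step; in the paper the whole pipeline is trained end-to-end on TED talk data.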
- Minsu Jang, Do Hyung Kim, Jaeyeon Lee, Jaehong Kim, “Building Datasets for Training Robots’ Social Intelligence” (ETRI) [Teaser]
Project AIR (Artificial Intelligence for Robots) aims to build software modules that can mimic human social intelligence based on machine learning. In this paper, we provide specifications of the various datasets we identified as being valuable for training social intelligence. Datasets for lip-reading, speech and affect recognition, human tracking, action recognition, and gesture generation are being collected in labs, test beds, and living labs in the domain of elderly people’s life care. We describe details of the datasets and how they can be used. We believe these datasets will contribute to the advancement of machine learning techniques for developing social robots and social intelligence.
- Dahyun Kang, JongSuk Choi, “sHRI Strategies to Attract Children’s Attention toward a Robot Based on Their Extroversion” (KIST) [Teaser]
This study investigated what type of robot can attract children’s attention effectively based on their personality. We recruited five children and measured each child’s degree of extroversion using the Big Five Inventory to find the robot interaction type appropriate to each child’s personality in a clothing store setting. The children interacted with a proactive clerk robot and a reactive clerk robot in random order while they picked out clothes. As a result, the frequency of interaction between the children and a robot differed depending on their extroversion. Participants with a relatively more extroverted tendency interacted more actively with the proactive robot than with the reactive robot. On the other hand, participants with a relatively less extroverted tendency interacted more positively with the reactive robot than with the proactive robot, even though the reactive robot did not try to attract their attention.
- Chaewon Park, Jongsuk Choi, Jee Eun Sung, Yoonseob Lim, “Toward understanding linguistic behaviors of human depending on task performance of voice assistant” (KIST) [Teaser]
Humans have the pragmatic ability to alter their utterances depending on knowledge shared with a conversation partner. In this study, we designed an experiment to explore how humans adapt linguistically depending on the level of a voice assistant’s task performance. Through the proposed experiment, we will test whether people show changes in linguistic behavior, linguistic complexity, and the pragmatic ability required.
- Insik Yu, Jong-uk Lee, Byung-gi Choi, Jaeho Lee, “Work in progress: Social Service Template for Social Robot Service” (University of Seoul) [Teaser]
In this paper, we present a template for describing the social services of social robots. We use the Gaia methodology to describe social services in which multiple actors participate. Service templates consist of the roles required to perform a service and the interactions between those roles. This makes it possible to describe services in which multiple actors participate.
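The template structure the abstract describes (roles plus the interactions between them) could be sketched as a simple data structure like the one below. This is a hypothetical illustration in the spirit of the Gaia methodology; all field names and the example service are invented, not taken from the paper.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a multi-actor service template; field names
# and the example content are illustrative, not from the paper.

@dataclass
class Role:
    name: str
    responsibilities: list[str] = field(default_factory=list)

@dataclass
class Interaction:
    initiator: str   # role name
    responder: str   # role name
    purpose: str

@dataclass
class ServiceTemplate:
    service: str
    roles: list[Role]
    interactions: list[Interaction]

reception = ServiceTemplate(
    service="robot receptionist",
    roles=[
        Role("Greeter", ["detect visitor", "greet"]),
        Role("Guide", ["answer questions", "escort to destination"]),
    ],
    interactions=[
        Interaction("Greeter", "Guide", "hand over an identified visitor"),
    ],
)
print(len(reception.roles), len(reception.interactions))
```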
- Gun-Hee Cho, Su-Phil Cho, Yong-Suk Choi, “Dialog Generation System based on Social Knowledge for a Service Robot” (Hanyang University) [Teaser]
The dialog generation system of a service robot needs knowledge-driven techniques that store dialog contexts. With such knowledge, the dialog generation system can generate sentences that match the situation and the flow of conversation. Thus, we developed a knowledge-based dialogue generation system and confirmed that it successfully generates a variety of conversations to perform the service.
Program:
8:30 – 8:40 Registration & Greeting
8:40 – 8:45 Opening (Minsu Jang)
8:45 – 9:35 Invited Talk I: Remote-presentation
9:45 – 11:00 Invited Talk II
11:00 – 11:20 Break
11:20 – 11:50 Teaser Session: 3-minute talks
11:50 – 12:50 Poster Session
12:50 – 13:00 Closing (Ho Seok Ahn)
Invited Talk I: Remote-presentation (Chair: Minsu Jang)
8:45 – 9:10 David Sirkin (Stanford University, USA) – remote presentation
9:10 – 9:35 Hae Won Park (MIT, USA) – remote presentation
Invited Talk II (Chair: Minsu Jang)
9:45 – 10:10 Takayuki Kanda (Kyoto University/ATR, Japan)
10:10 – 10:35 Amit Kumar Pandey (SoftBank Robotics)
10:35 – 11:00 Franziska Kirstein (Blue Ocean Robotics)
Technical Session (Chair: Ho Seok Ahn)
9:35 – 9:45 Anastasia Ostrowski – remote presentation
11:20 – 11:50 Teaser session
11:50 – 12:50 Poster session
Conference:
- November 28-30, 2018, Qingdao, China
Contact:
- Ho Seok Ahn (The University of Auckland, hs.ahn[at]auckland.ac.nz)
- Minsu Jang (ETRI, minsu[at]etri.re.kr)