TY - GEN
T1 - Tracking and fusion for multiparty interaction with a virtual character and a social robot
AU - Yumak, Zerrin
AU - Ren, Jianfeng
AU - Thalmann, Nadia Magnenat
AU - Yuan, Junsong
PY - 2014/11/24
Y1 - 2014/11/24
N2 - To give human-like capabilities to artificial characters, we should equip them with the ability to infer user states. These artificial characters should understand users' behaviors through various sensors and respond using multimodal output. Besides natural multimodal interaction, they should also be able to communicate with multiple users and among each other in multiparty interactions. Previous work on interactive virtual humans and social robots mainly focuses on one-to-one interactions. In this paper, we study the tracking and fusion aspects of multiparty interactions. We first give a general overview of our proposed multiparty interaction system and explain how it differs from previous work. Then, we provide the details of the tracking and fusion component, including speaker identification, addressee detection, and a dynamic user entrance/leave mechanism based on user re-identification using a Kinect sensor. Finally, we present a case study with the system and discuss its current capabilities, limitations, and future work.
AB - To give human-like capabilities to artificial characters, we should equip them with the ability to infer user states. These artificial characters should understand users' behaviors through various sensors and respond using multimodal output. Besides natural multimodal interaction, they should also be able to communicate with multiple users and among each other in multiparty interactions. Previous work on interactive virtual humans and social robots mainly focuses on one-to-one interactions. In this paper, we study the tracking and fusion aspects of multiparty interactions. We first give a general overview of our proposed multiparty interaction system and explain how it differs from previous work. Then, we provide the details of the tracking and fusion component, including speaker identification, addressee detection, and a dynamic user entrance/leave mechanism based on user re-identification using a Kinect sensor. Finally, we present a case study with the system and discuss its current capabilities, limitations, and future work.
KW - Interactive virtual human
KW - Multimodal fusion
KW - Multiparty interaction
KW - Social robot
UR - http://www.scopus.com/inward/record.url?scp=84919360562&partnerID=8YFLogxK
U2 - 10.1145/2668956.2668958
DO - 10.1145/2668956.2668958
M3 - Conference contribution
AN - SCOPUS:84919360562
T3 - SIGGRAPH Asia 2014 Autonomous Virtual Humans and Social Robot for Telepresence, SA 2014
BT - SIGGRAPH Asia 2014 Autonomous Virtual Humans and Social Robot for Telepresence, SA 2014
PB - Association for Computing Machinery
T2 - SIGGRAPH Asia 2014 Workshop on Autonomous Virtual Humans and Social Robot for Telepresence, SA 2014
Y2 - 3 December 2014 through 6 December 2014
ER -