Collecting human/robot interaction data that matches human habits, social rules, and expectations/reactions in front of an autonomous robot might seem tricky while autonomous social robots are still under development...
At GIPSA-lab, the CRISSP team has built a platform to collect such face-to-face interaction data (speech, gaze, head movements, blink rate, as well as audio and video of the environment) for the Nina robot (an iCub fitted with a jaw and lips). We use immersive teleoperation, with a human pilot acting as the "insider" who endows the machine with his cognitive skills and social behavior. Thanks to a VR headset that embeds two eye-trackers, the pilot senses the world through the robot's eyes, ears and active head movements as if they were his own body. He can keep his human/social strategies and behaviors, or adapt them to cope with this new body (motor noise, VGA-resolution video, movement delays, blurred vision) while directing the robotic head and gaze at will, benefiting from the eye vergence capability of the iCub.
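To give a concrete idea of the kind of synchronized record such a platform can log for each frame of interaction, here is a minimal Python sketch covering the streams mentioned above (gaze, head movements, blinks, audio/video). The field names, units, and file layout are hypothetical illustrations, not the actual GIPSA-lab/Nina data format.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class InteractionFrame:
    """One hypothetical time-stamped sample of the logged interaction streams."""
    timestamp_s: float                      # time since the start of the session, in seconds
    gaze_direction: Tuple[float, float]     # pilot gaze (azimuth, elevation), in degrees
    head_pose: Tuple[float, float, float]   # robot head (yaw, pitch, roll), in degrees
    eye_vergence_deg: float                 # vergence angle commanded to the iCub eyes
    blink: bool                             # True if the eye-tracker reports a blink
    audio_file: str                         # path to the synchronized audio recording
    video_file: str                         # path to the synchronized scene video

# Example: one logged frame during a memory-test session (illustrative values only).
frame = InteractionFrame(
    timestamp_s=12.34,
    gaze_direction=(5.0, -2.0),
    head_pose=(10.0, -3.0, 0.0),
    eye_vergence_deg=1.5,
    blink=False,
    audio_file="session01/audio.wav",
    video_file="session01/scene.mp4",
)
print(frame)
```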
In this video, part of the SOMBRERO project, the pilot (visible at the bottom right) "demonstrates" to the robot how to conduct a memory test.