Close Encounters

Expressing Intention and Perception through Gaze

This is a module that lets self-driving robots (AGVs) communicate their navigational decisions to the people around them. It does so through a screen showing a pair of eyes that look toward the robot's direction of travel. Using a camera, the robot can also make eye contact with nearby people, making it clear that it has seen them and will avoid colliding with them.
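A minimal sketch of the gaze logic this describes, assuming hypothetical names and a simple angle-to-pixel mapping (not the module's actual API): the eyes follow the planned travel direction, and switch to a detected person's position to signal acknowledgement.

```python
import math
from dataclasses import dataclass

@dataclass
class GazeTarget:
    """Horizontal/vertical gaze angles in radians, relative to straight ahead."""
    yaw: float
    pitch: float

def gaze_from_heading(heading_rad: float) -> GazeTarget:
    """Point the eyes toward the robot's planned travel direction."""
    return GazeTarget(yaw=heading_rad, pitch=0.0)

def gaze_from_person(bearing_rad: float, elevation_rad: float) -> GazeTarget:
    """Point the eyes at a person detected by the camera to signal acknowledgement."""
    return GazeTarget(yaw=bearing_rad, pitch=elevation_rad)

def pupil_offset(target: GazeTarget, max_offset_px: float = 40.0) -> tuple[float, float]:
    """Map gaze angles to pupil offsets on the eye display, clamped to the eye radius."""
    dx = max_offset_px * max(-1.0, min(1.0, target.yaw / (math.pi / 2)))
    dy = max_offset_px * max(-1.0, min(1.0, target.pitch / (math.pi / 2)))
    return dx, dy

# Example: prefer eye contact with a detected person, otherwise look along the path.
person = None                     # e.g. (bearing, elevation) from the camera's person detector
heading = math.radians(20)        # planned travel direction from the navigation stack
target = gaze_from_person(*person) if person else gaze_from_heading(heading)
print(pupil_offset(target))
```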

We often rely on eye contact and gaze direction to predict where others are heading and what to expect from them. Because functional robots in particular rarely offer such a clear focal point for interaction, people tend to find them unpredictable, with negative consequences for the implementation and, ultimately, the effectiveness of human-robot collaboration.

Eyes demo