| Head | 1 | Range finding sensor |
|  | 3 | Range finder sensors |
| Obstacle passing | 1.5 | cm step |
The robot is designed for human-robot interaction. Its navigation components allow it to move around. The robot's sensors (Time of Flight, ultrasound, 3D camera, ground sensors) enhance its obstacle and cliff avoidance capabilities. This allows it to safely search for a user, react to human presence, and assess the user's engagement.
The robot's cameras give it the ability to detect and recognize shapes such as the human face and skeleton, enabling human detection and tracking. The microphones, the touchscreen, and the caress sensors allow the robot to react to a user's actions. The robot can listen for a user calling it and react to vocal commands. It is also able to express an emotional reaction when interacting with a user.
The sensors and actuators detect, act, and interact in a human-centric environment. The robot maintains its own internal state, composed of desires and emotions, and makes decisions based on stimuli from its environment and previous interactions. This internal state influences the robot's behavior, both in the actions it may take and in the way it carries them out. Buddy uses its face and its actuators (color LEDs, wheel motors, head motors) to display emotions. It also plays sounds and uses a speech synthesis engine.
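The description above of an internal state of desires and emotions that shapes action selection can be sketched as a small model. This is a minimal, illustrative sketch only: the class, the stimulus names, and the numeric weights are all assumptions for demonstration, not the robot's actual software.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an internal state (desires and emotions) that
# biases action selection, as described in the text. All names and
# weights are illustrative assumptions, not the Buddy SDK.

@dataclass
class InternalState:
    emotions: dict = field(default_factory=lambda: {"happy": 0.5, "scared": 0.0})
    desires: dict = field(default_factory=lambda: {"interact": 0.7, "explore": 0.3})

    def update(self, stimulus: str) -> None:
        # Stimuli from the environment and interactions shift the state.
        if stimulus == "user_detected":
            self.emotions["happy"] = min(1.0, self.emotions["happy"] + 0.2)
            self.desires["interact"] = min(1.0, self.desires["interact"] + 0.1)
        elif stimulus == "obstacle":
            self.emotions["scared"] = min(1.0, self.emotions["scared"] + 0.3)

    def choose_action(self) -> str:
        # The internal state influences which action is taken and how it
        # is expressed (e.g. LED colour, head motion, sound).
        if self.emotions["scared"] > 0.5:
            return "retreat"
        if self.desires["interact"] >= self.desires["explore"]:
            return "greet_user"
        return "explore"

state = InternalState()
state.update("user_detected")
print(state.choose_action())  # with these illustrative weights: greet_user
```

The point of the sketch is the feedback loop: stimuli update the state, and the state in turn filters the action chosen, so the same stimulus can yield different behavior depending on prior interactions.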
The robot's SDK gives access to all of Buddy's components and provides high-level functions such as autonomous navigation, emotional behavior display, GUI management, and dialogue management.
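To make the layering concrete, the feature areas the SDK covers could be expressed as an application-facing interface. The sketch below is purely hypothetical: these class and method names are assumptions chosen to mirror the listed feature areas, not the actual Buddy SDK API.

```python
from abc import ABC, abstractmethod

# Illustrative only: a hypothetical high-level interface mirroring the
# feature areas named in the text (navigation, emotion display, speech).
# Names are assumptions, not the real Buddy SDK.

class RobotAPI(ABC):
    @abstractmethod
    def navigate_to(self, x: float, y: float) -> None: ...

    @abstractmethod
    def show_emotion(self, emotion: str) -> None: ...

    @abstractmethod
    def say(self, text: str) -> None: ...

class DemoRobot(RobotAPI):
    """In-memory stand-in that records the calls an application would make."""

    def __init__(self) -> None:
        self.log: list[str] = []

    def navigate_to(self, x: float, y: float) -> None:
        self.log.append(f"navigate_to({x}, {y})")

    def show_emotion(self, emotion: str) -> None:
        self.log.append(f"show_emotion({emotion})")

    def say(self, text: str) -> None:
        self.log.append(f"say({text})")

robot = DemoRobot()
robot.navigate_to(1.0, 2.0)
robot.show_emotion("happy")
robot.say("Hello!")
print(robot.log)
```

The value of such an interface is that application code depends only on high-level intents (go somewhere, show an emotion, speak), while the SDK maps those intents onto the underlying sensors and actuators.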
Describes the robot, its features and applications.
Describes the robot, its interactive components, behavior & internal state, and the software.
Describes the robot, its features and applications. Describes the team, its journey to realization, and the previous crowdfunding campaign.