Power: Rechargeable 51.8 V lithium-ion battery
ASIMO (Advanced Step in Innovative Mobility) can recognize moving objects, postures, gestures, its surrounding environment, sounds, and faces, which enables it to interact with humans. The robot detects the movements of multiple objects using visual information captured by the two camera "eyes" in its head and can also determine their distance and direction. This allows ASIMO to follow a person or turn to face someone who approaches.
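Honda has not published ASIMO's vision pipeline in detail here, but the distance-and-direction estimate described above is the classic stereo-vision calculation: depth from the disparity between the two camera images, and bearing from the pixel offset relative to the optical axis. A minimal sketch, with illustrative (not ASIMO's actual) calibration numbers:

```python
import math

def stereo_depth(disparity_px, focal_length_px, baseline_m):
    """Depth from stereo disparity: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

def bearing(x_px, cx_px, focal_length_px):
    """Horizontal bearing (radians) of a pixel relative to the optical axis."""
    return math.atan2(x_px - cx_px, focal_length_px)

# Example: a feature seen with 20 px disparity by cameras with a
# 700 px focal length and a 7 cm baseline (hypothetical values).
depth_m = stereo_depth(20, 700, 0.07)          # 2.45 m away
angle_deg = math.degrees(bearing(350, 320, 700))  # slightly right of center
```

Combining the depth with the bearing gives the object's position relative to the robot's head, which is what a follow-or-face behavior needs.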
The robot also interprets voice commands and human gestures, enabling it to recognize when a handshake is offered or when a person waves or points, and then respond accordingly. ASIMO's ability to distinguish between voices and other sounds allows it to identify its companions.
ASIMO responds to its name and recognizes sounds associated with falling objects or collisions, so it can face a person when spoken to or look toward a sound. It answers questions with a nod or a verbal reply in several languages and can recognize approximately ten different faces and address them by name.
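The "look toward a sound" behavior is typically implemented with sound-source localization from a microphone pair: the time-difference-of-arrival (TDOA) between the microphones gives the direction of the sound. The source does not specify ASIMO's method, so the following is only a sketch of the standard far-field approximation, sin(θ) = c · Δt / d:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def sound_direction(tdoa_s, mic_spacing_m, speed=SPEED_OF_SOUND):
    """Bearing (radians) of a sound source from the time-difference-of-arrival
    between two microphones, using the far-field approximation."""
    s = speed * tdoa_s / mic_spacing_m
    s = max(-1.0, min(1.0, s))  # clamp numerical overshoot at the endpoints
    return math.asin(s)

# Example: sound arrives 0.2 ms earlier at one mic of a pair 15 cm apart
# (hypothetical spacing; ASIMO's microphone layout is not given here).
theta_deg = math.degrees(sound_direction(0.0002, 0.15))
```

The resulting angle can drive the head-turn motion directly; distinguishing voices from impact sounds, as described above, would be a separate classification step.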
Discusses the history of the humanoid, design choices, human interaction abilities, and an assortment of technical specifications.
Describes the planning algorithms and environmental models used to develop the desired walking method within the ASIMO robot. Also includes tests with obstacles in the humanoid's path and future work, such as incorporating visual sensors into the path-planning sequence.
Explains the structure of the robot's intelligence system, the integrated subsystems on its body, and their new functions. Also examines the robot's speech system, sound-localization features, and gesture-recognition capabilities.
Unitree launched its first humanoid robot, the H1, on August 15. Showcasing six months of hard work, the H1 stands about 71 inches (1,800 mm) tall yet weighs only about 100 lb (47 kg) and boasts unmatched power.
Columbia Engineering researchers use AI to teach robots to make appropriate, reactive human facial expressions, an ability that could build trust between humans and their robotic co-workers and caregivers.