Research from MIT suggests that a certain type of robust computer-vision model perceives visual representations similarly to the way humans do using peripheral vision.
This article explores TPU vs GPU differences in architecture, performance, energy efficiency, cost, and practical implementation, helping engineers and designers choose the right accelerator for today's AI workloads.
EPFL roboticists have shown that when a modular robot shares power, sensing, and communication resources among its individual units, it is significantly more resistant to failure than traditional robotic systems, where the breakdown of one element often means a loss of functionality.
MIT researchers' DiffSyn model offers recipes for synthesizing new materials, enabling faster experimentation and a shorter journey from hypothesis to use.
This 3D vision system for a mobile robot picker consists of an AMR with a robotic arm equipped with the Photoneo MotionCam-3D camera. The hand-eye setup allows the robot to scan and approach each object from an appropriate angle, distance, and position.
Designed as a necklace-style wearable in a 3D-printed case, SpeeChin is capable of recognizing 54 English and 44 Chinese voice commands — even if the wearer never says anything out loud.
The in-sensor adaptation strategy widens the range of image perception under different illumination conditions, reducing the complexity of both the hardware and the algorithms.
Sounds provide important information about how well a machine is running. ETH researchers have now developed a new machine learning method that automatically detects whether a machine is "healthy" or requires maintenance.
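The ETH article does not describe its method in detail, but the general idea of flagging a machine as "healthy" or in need of maintenance from its sound can be illustrated with a toy frequency-domain classifier. The following is a minimal sketch under assumed conditions (a clean hum for a healthy machine, added broadband noise for a faulty one); the function names and the spectral-centroid threshold are illustrative, not the researchers' actual approach.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_centroid(signal, rate=8000):
    """Mean frequency of the signal, weighted by spectral magnitude."""
    mags = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    return float(np.sum(freqs * mags) / np.sum(mags))

def make_signal(faulty, n=8000, rate=8000):
    """Synthetic machine sound: a 50 Hz hum, plus noise if faulty."""
    t = np.arange(n) / rate
    s = np.sin(2 * np.pi * 50 * t)
    if faulty:
        s += 0.5 * rng.standard_normal(n)  # broadband fault noise
    return s

# Calibrate a decision threshold between the two conditions.
healthy_c = spectral_centroid(make_signal(False))
faulty_c = spectral_centroid(make_signal(True))
threshold = (healthy_c + faulty_c) / 2

def classify(signal):
    """Label a recording by which side of the threshold its centroid falls on."""
    return "faulty" if spectral_centroid(signal) > threshold else "healthy"
```

A real system would learn such boundaries from many labeled recordings rather than a hand-set threshold, but the pipeline shape (extract spectral features, then classify) is the same.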
Designed to address false predictions on out-of-distribution (OOD) data, this approach adaptively synthesizes virtual outliers that help maintain the model's decision boundary during training.
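The core idea behind virtual-outlier synthesis can be sketched in a few lines: fit a Gaussian to the in-distribution features of a class, then sample candidates from that Gaussian and keep only those falling in its low-likelihood tail as "virtual outliers". The sketch below is a simplified illustration with assumed 2-D features and an arbitrary 95% quantile cutoff; it is not the paper's full training procedure, which would feed these outliers into a regularization loss.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for penultimate-layer features of one class (2-D for clarity).
features = rng.normal(loc=[2.0, -1.0], scale=0.5, size=(500, 2))

# Fit a class-conditional Gaussian to the features.
mu = features.mean(axis=0)
cov = np.cov(features, rowvar=False)
inv_cov = np.linalg.inv(cov)

def mahalanobis_sq(x):
    """Squared Mahalanobis distance of x from the fitted Gaussian."""
    d = x - mu
    return float(d @ inv_cov @ d)

# Sample candidates from the fitted Gaussian and keep the least likely ones.
candidates = rng.multivariate_normal(mu, cov, size=2000)
dists = np.array([mahalanobis_sq(c) for c in candidates])
cutoff = np.quantile(dists, 0.95)  # keep the 5% lowest-likelihood samples
virtual_outliers = candidates[dists >= cutoff]
```

Because the outliers sit just beyond the bulk of the class distribution, training the classifier to assign them low confidence tightens its decision boundary around the real data.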
Designed to address the exploding computational demands of deep neural networks, physical neural networks (PNNs) branch out from electronics into optics and even mechanics to boost performance and efficiency.
Designed for use in the human body for everything from drug delivery to less-invasive biopsies, these tiny microrobots operate under the control of a deep-learning system trained with no modelling or prior environmental knowledge.
Taking its cues from the haunting electronic instrument, the MoCapaci project sews a theremin into a blazer to feed a deep-learning system with data for accurate gesture sensing and activity recognition.
Over the last decade, deep neural networks have emerged as the solution to several complex AI applications, from speech recognition and object detection to autonomous vehicle systems.
Designed to address the risk COVID-19 poses to front-line staff, this autonomous swab-sampling robot takes the human element out of sample gathering.