NPUs are integrated units that excel at low-power, real-time AI tasks on edge devices such as smartphones and IoT systems. TPUs are standalone processors designed for large-scale AI workloads in data centers, delivering exceptional performance in deep learning tasks.
Professor Anna Erickson highlights the reopening of Three Mile Island Unit 1 as a crucial step in meeting the growing energy demands of AI data centers with carbon-free nuclear power, aligning with Microsoft's sustainability goals.
Artificial intelligence (AI) is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decision-making.
EPFL roboticists have shown that when a modular robot shares power, sensing, and communication resources among its individual units, it is significantly more resistant to failure than traditional robotic systems, where the breakdown of one element often means a loss of functionality.
MIT researchers' DiffSyn model offers recipes for synthesizing new materials, enabling faster experimentation and a shorter journey from hypothesis to use.
Matroid builds no-code computer-vision detectors that can spot everything from microscopic material defects to real-time safety hazards on a factory floor.
In large-scale warehousing and distribution operations, conveyor belts are essential infrastructure that must operate with near-zero downtime to ensure the timely delivery of products. The presence of loose or foreign items on a conveyor belt can pose a serious risk to these operations.
In this post, we'll walk through how to evaluate that progress using the same metrics our platform provides automatically, so you can build detectors that get smarter, sharper, and more reliable over time.
The use of generative artificial intelligence in protein design stands to revolutionize new drug development. EPFL aims to put together a consortium to further explore this avenue.
In an interview from CES 2025, Vikram Gupta, Chief Product Officer at Synaptics, discusses the company's strategic collaboration with Google to enhance Edge AI processors, emphasizing the importance of context-aware computing and AI-native platforms for improving user experience and efficiency.
Infineon’s Power System Reliability Modeling enhances power reliability in data centers by enabling real-time power supply monitoring, predictive maintenance, and lifetime estimation from component to system level.
Learn how agnostic systems like Awentia's No-Data Vision Foundation Model address key barriers to AI adoption, such as data dependency, cost, and complexity, across industries like agriculture, robotics, and manufacturing.
This article is a detailed analysis of In-Memory Compute technology, covering its architecture, use cases, recent advancements, and practical implementation strategies to enhance computational efficiency.
For this article we interviewed Niwa and Omura, who are responsible for the design, development, and operation of the system, as well as Toda, who requested its development and is also a user.
Generative AI has matured into an advanced, general-purpose technology that is now in practical use, with applications appearing in increasingly familiar settings. Nowadays, the use of AI, including generative AI, has even expanded into the field of sports.
Specialized microchips that manage signals at the cutting edge of wireless technology are astounding works of miniaturization and engineering. They’re also difficult and expensive to design.
EPFL researchers have developed 4M, a next-generation, open-source framework for training versatile and scalable multimodal foundation models that go beyond language.