Developed at Google Research, HyperTransformer decouples the task space and individual task complexity to generate all model weights in just one pass — while also offering support for unlabeled sample ingestion.
Hardware and software engineers will be the lifeblood of tomorrow's connected world. Academia is working hard to ensure a steady supply, in part by adapting engineering education to train the next generation of IoT innovators.
Artificial intelligence (AI) is a wide-ranging tool that enables people to rethink how they integrate information, analyze data, and use the resulting insights to improve decision making.
"MechStyle" allows users to personalize 3D models, while ensuring they're physically viable after fabrication, producing unique personal items and assistive technology.
A simulation of an individual patient's brain tumors, kept up to date with readily available data, can identify whether dietary treatments and drugs are likely to work.
Hey Tuya showcases how agentic Physical AI evolves assistants from executors of isolated commands into systems that learn context, coordinate devices, and act reliably across real-world environments.
This episode celebrates five years of The Next Byte with a recap of 2025, highlighting standout episodes through The "Saucies" Awards, reflecting on major trends, sharing predictions for 2026, and thanking the global listener community.
Matroid builds no-code computer-vision detectors that can spot everything from microscopic material defects to real-time safety hazards on a factory floor.
In large-scale warehousing and distribution operations, conveyor belts are an essential infrastructure that must operate with near-zero downtime to ensure the timely delivery of products. The presence of loose or foreign items on a conveyor belt can pose a serious risk to these operations.
In this post, we'll walk through how to evaluate that progress using the same metrics our platform provides automatically, so you can build detectors that get smarter, sharper, and more reliable over time.
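As a minimal illustration of the kind of metrics such an evaluation typically relies on (the post's specific platform metrics are not listed here, so this is an assumed example), detector quality is often summarized with precision and recall computed from true-positive, false-positive, and false-negative counts:

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Compute precision and recall from detection counts.

    tp: true positives (correct detections)
    fp: false positives (spurious detections)
    fn: false negatives (missed objects)
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical run: 90 correct detections, 10 false alarms, 30 misses
p, r = precision_recall(90, 10, 30)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.90, recall=0.75
```

Tracking these two numbers across retraining rounds shows whether a detector is genuinely improving or merely trading false alarms for misses.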
Designed to dramatically reduce the amount of training data needed for an image recognition system, this one-shot approach "inspired by nativism" takes a leaf from humans' ability to intuit and abstract.
Advances in software allow a customized car to perform controlled, autonomous drifting to enhance active safety and to give drivers the skills of professional racers.
In this article, we look at two tinyML projects for education. We show how Backyard Brains uses low-cost experiment kits to make neuroscience education more accessible, and we introduce a specialisation offered by Harvard and Google that helps students learn tinyML hands-on.
Considered obsolete since the introduction of vision transformers, ConvNeXt proves there's life in convolution yet — outperforming its rivals by adopting some of their own tricks.
Soon, internet users will be able to meet each other in cyberspace as animated 3D avatars. Researchers at ETH Zurich have developed new algorithms for creating virtual humans much more easily.
MotionCam-3D by Photoneo enables 3D scanning & handling of objects that are moving on an overhead conveyor without interruption. The camera provides high-quality 3D data even while the objects are moving, swinging, or slightly rotating.
ETH researchers led by Marco Hutter developed a new control approach that enables a legged robot, called ANYmal, to move quickly and robustly over difficult terrain. Thanks to machine learning, the robot can combine its visual perception of the environment with its sense of touch for the first time.