Electronic computing was born in the form of massive machines in air-conditioned rooms, migrated to desktops and laptops, and lives today in tiny devices like watches and smartphones.
But why stop there, asks an international team of Stanford-led engineers. Why not build an entire computer onto a single chip? It could have processing circuits, memory storage and a power supply to perform a given task, such as measuring moisture in a row of crops. Equipped with machine learning algorithms, the chip could make on-the-spot decisions such as when to water. And with wireless technology it could send and receive data over the internet.
Engineers call this vision of ubiquitous computing the Internet of Everything. But to achieve it they’ll need to develop a new class of chips to serve as its foundation.
The researchers will unveil the prototype for such a computer-on-a-chip Feb. 19 at the International Solid-State Circuits Conference in San Francisco. The prototype’s data processing and memory circuits use less than a tenth as much electricity as any comparable electronic device, and despite its small size it is designed to perform many advanced computing feats.
“This is what engineers do,” said Subhasish Mitra, a professor of electrical engineering and of computer science who worked on the chip. “We create a whole that is greater than the sum of its parts.”
New memory is the key
The prototype is built around a new data storage technology called RRAM (resistive random access memory), which has features essential for this new class of chips: storage density to pack more data into less space than other forms of memory; energy efficiency that won’t overtax limited power supplies; and the ability to retain data when the chip hibernates, as it is designed to do as an energy-saving tactic.
RRAM has another essential advantage. Engineers can build RRAM directly atop a processing circuit to integrate data storage and computation into a single chip. Stanford researchers have pioneered this concept of uniting memory and processing into one chip because it’s faster and more energy efficient than passing data back and forth between separate chips as is the case today. The French team at CEA-LETI was responsible for grafting the RRAM onto a silicon processor.
In order to improve the storage capacity of RRAM, the Stanford group made a number of changes. One was to increase how much information each storage unit, called a cell, can hold. Memory devices typically consist of cells that can store either a zero or a one. The researchers devised a way to pack five values into each memory cell, rather than just the two standard options.
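Storing five values per cell amounts to writing data in base 5 rather than binary, which raises the capacity from 1 bit to log2(5) ≈ 2.32 bits per cell. The article does not describe the actual encoding, so the following is a minimal sketch of the idea: a hypothetical `pack`/`unpack` pair that maps an integer onto a row of five-level cells and back.

```python
import math

LEVELS = 5  # hypothetical number of distinguishable resistance levels per cell

def pack(value: int, num_cells: int) -> list[int]:
    """Encode a non-negative integer as base-5 digits, one per memory cell."""
    if value >= LEVELS ** num_cells:
        raise ValueError("value too large for the given number of cells")
    cells = []
    for _ in range(num_cells):
        cells.append(value % LEVELS)   # each cell holds one of 5 levels: 0..4
        value //= LEVELS
    return cells

def unpack(cells: list[int]) -> int:
    """Recover the integer from its base-5 cell values."""
    value = 0
    for level in reversed(cells):
        value = value * LEVELS + level
    return value

# Capacity comparison: a binary cell holds 1 bit; a 5-level cell holds
# log2(5) ≈ 2.32 bits, so the same data fits in fewer cells.
bits_per_cell = math.log2(LEVELS)
```

Four five-level cells can represent 625 distinct values, where four binary cells would hold only 16.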
A second enhancement improved the endurance of RRAM. Think about data storage from a chip’s point of view: As data is continuously written to a chip’s memory cells, they can become exhausted, scrambling data and causing errors. The researchers developed an algorithm to prevent such exhaustion. They tested the endurance of their prototype and found that it should have a 10-year lifespan.
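The article does not spell out the endurance algorithm, but the general technique of spreading writes across cells so that no single cell wears out early is known as wear leveling. Below is a minimal illustrative sketch, not the team's method: a hypothetical memory wrapper that redirects each write to the least-worn available physical cell and keeps a logical-to-physical remapping table.

```python
class WearLeveledMemory:
    """Toy wear-leveling scheme: every write goes to the least-worn cell."""

    def __init__(self, num_cells: int):
        self.data = [0] * num_cells      # physical cell contents
        self.writes = [0] * num_cells    # per-cell write counts (wear)
        self.logical_to_physical = {}    # remapping table

    def write(self, logical_addr: int, value: int) -> None:
        # Exclude cells mapped to *other* logical addresses; the cell
        # currently backing this address may be reused or abandoned.
        in_use = set(self.logical_to_physical.values()) - \
                 {self.logical_to_physical.get(logical_addr)}
        candidates = [i for i in range(len(self.data)) if i not in in_use]
        target = min(candidates, key=lambda i: self.writes[i])
        self.logical_to_physical[logical_addr] = target
        self.data[target] = value
        self.writes[target] += 1

    def read(self, logical_addr: int) -> int:
        return self.data[self.logical_to_physical[logical_addr]]
```

Repeatedly rewriting one logical address then cycles through all physical cells, so wear accumulates evenly instead of exhausting a single cell.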
Mitra said the team’s computer scientists and electrical engineers worked together to integrate many software and hardware technologies on the prototype, which is currently about the diameter of a pencil eraser. Although that is too large for futuristic Internet of Everything applications, even now the way that the prototype combines memory and processing could be incorporated into the chips found in smartphones and other mobile devices. Chip manufacturers are already showing interest in this new architecture, which was one of the goals of the Stanford-led team. Mitra said experience gained manufacturing one generation of chips fuels efforts to make the next iteration smaller, faster, cheaper and more capable.
“The SystemX Alliance has allowed a great collaboration between Stanford and CEA-LETI on edge AI applications, covering circuit architecture, circuit design, down to advanced technologies,” said Emmanuel Sabonnadière, CEO of the French research institute.