Ophthalmology for robots could usher in video-rate, high-precision solid-state depth-sensing LiDAR

A team of scientists at Duke University have taken a technology originally developed for biological imaging, particularly of the eye, and applied it to LiDAR — considerably boosting the performance of the resulting depth-sensing system.

19 Apr, 2022

Captured using an FMCW LiDAR system inspired by optical coherence tomographic imaging of living tissue, these depth maps show promise for future robotic vision systems.

An imaging technique originally developed for ophthalmology on living creatures could prove key to giving future robots and autonomous vehicles millimeter-scale depth perception — and at video-rate speeds suitable for real-time use.

Existing light detection and ranging (LiDAR) systems have their drawbacks: low frame rates, high cost, and the mechanical parts found in common frequency-modulated continuous wave (FMCW) designs. Spotting these, scientists from Duke University’s Department of Biomedical Engineering teamed up with a colleague in the Department of Ophthalmology to try a different approach based on optical coherence tomography (OCT).

A robotic eye scan

“FMCW LiDAR shares the same working principle as OCT, which the biomedical engineering field has been developing since the early 1990s,” explains PhD student Ruobing Qian, first author of the paper. “But 30 years ago, nobody knew autonomous cars or robots would be a thing, so the technology focused on tissue imaging. Now, to make it useful for these other emerging fields, we need to trade in its extremely high resolution capabilities for more distance and speed.”

Effectively taking the core concept behind ultrasound imaging but replacing sound with light, OCT measures the phase shift of light waves bounced back by an object in their path relative to reference waves that traveled the same distance unobstructed. FMCW LiDAR works on much the same principle, but uses a laser beam whose frequency is continuously swept, typically fired at a mechanical rotating mirror assembly to actively scan its surroundings.
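To make the relationship concrete, the minimal sketch below simulates the swept-frequency ranging principle described above: the interference between returning and reference light oscillates at a "beat" frequency that grows with distance, so finding that frequency with a Fourier transform recovers the range. The sweep span, sweep time, and digitizer rate used here are illustrative assumptions rather than the Duke system's published figures.

    import numpy as np

    # Illustrative FMCW / swept-source ranging parameters -- assumptions, not the
    # published Duke system values.
    C = 3.0e8              # speed of light, m/s
    SWEEP_SPAN = 1.0e13    # assumed optical frequency sweep span, Hz (10 THz)
    SWEEP_TIME = 10e-6     # assumed sweep duration, s
    CHIRP_RATE = SWEEP_SPAN / SWEEP_TIME   # Hz of optical frequency per second
    SAMPLE_RATE = 4.0e9    # assumed digitizer rate, samples per second

    def simulate_beat(distance_m):
        """Light reflected from a target interferes with a reference copy of the
        sweep, producing a beat whose frequency is proportional to range."""
        t = np.arange(0, SWEEP_TIME, 1.0 / SAMPLE_RATE)
        round_trip_delay = 2.0 * distance_m / C
        f_beat = CHIRP_RATE * round_trip_delay
        return np.cos(2.0 * np.pi * f_beat * t)

    def range_from_beat(signal):
        """Recover range by locating the dominant beat frequency with an FFT."""
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), 1.0 / SAMPLE_RATE)
        return C * freqs[np.argmax(spectrum)] / (2.0 * CHIRP_RATE)

    print(f"estimated range: {range_from_beat(simulate_beat(0.25)):.4f} m")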

Combining techniques developed for long-range depth-sensing and 3D scanning of living tissue, the OCT-style system proposed by Duke researchers shows considerable performance improvements over its predecessors.

The problem with that approach is twofold. Firstly, moving parts have a tendency to break down and make for expensive and bulky apparatus. Secondly, and more importantly for many applications, the speed at which the LiDAR can scan is fundamentally limited by how quickly the mirror can move.

The OCT-style approach taken by the Duke team, which the researchers liken to stochastic optical reconstruction microscopy (STORM), uses a diffraction grating to spread out a laser source as it continuously sweeps through its range of operating frequencies, so that each frequency illuminates a different point along the horizontal axis. The result is effectively an OCT imaging system operating at a dramatically increased scale, retaining millimeter-scale sensitivity while performing some 25 times faster than traditional FMCW LiDAR systems.
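The grating's role can be illustrated with the standard grating equation: as the source sweeps through its wavelengths, the first-order diffraction angle sweeps too, steering the beam across the horizontal axis without any moving parts. The sketch below assumes a groove density, incidence angle, and wavelength range chosen purely for illustration; they are not the parameters of the published system.

    import numpy as np

    # Illustrative grating-scan sketch -- groove density, incidence angle, and
    # wavelength span are assumptions, not the published system parameters.
    GROOVES_PER_MM = 1200
    PITCH = 1e-3 / GROOVES_PER_MM          # groove spacing, m
    THETA_IN = np.deg2rad(45.0)            # assumed angle of incidence
    ORDER = 1                              # first diffraction order

    def diffraction_angle(wavelength_m):
        """Grating equation d*(sin(theta_in) + sin(theta_out)) = m*lambda,
        solved for the outgoing angle."""
        return np.arcsin(ORDER * wavelength_m / PITCH - np.sin(THETA_IN))

    # Sweep across 100 nm around 1300 nm: each wavelength leaves at a different
    # angle, so the frequency sweep itself scans the beam across the scene.
    for wl in np.linspace(1250e-9, 1350e-9, 5):
        print(f"{wl * 1e9:6.1f} nm -> {np.rad2deg(diffraction_angle(wl)):5.1f} deg")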

Video-rate imaging

Proven in prototype using a programmable swept laser source, a galvanometer mirror for vertical-axis scanning, and the transmissive diffraction grating for horizontal-axis scanning, the system the team developed performs impressively. The team claims sub-millimeter localization accuracy from just 200 spectral points, while being able to capture real-time 3D images of moving objects — including, in one test, an experimenter’s hand — at an overall acquisition rate of 7.6 frames per second.

This “video-rate” capture, the team explains, isn’t wholly down to the hardware used within the system; instead, a big contribution comes courtesy of a compressed time-frequency analysis approach for depth information retrieval. Applying an optimized window size and a zero-padding approach allows the imaging system to capture 238 depth measurements along the high-speed horizontal axis in a single sweep of the laser.
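In outline, that analysis resembles a short-time Fourier transform along each sweep: every short, tapered window covers a narrow slice of the frequency sweep, and thus one horizontal position, while zero-padding sharpens the beat-frequency peak that stands in for depth at that position. The toy sketch below illustrates only this windowing idea; the window count, padding factor, and test signal are assumptions for demonstration, not the authors' MATLAB pipeline.

    import numpy as np

    def depths_along_fast_axis(interferogram, n_windows=238, pad_factor=4):
        """Split one sweep's interference signal into short windows; the dominant
        beat-frequency bin in each window stands in for the depth at one
        horizontal position (the bin index is returned as a depth proxy)."""
        window_len = len(interferogram) // n_windows
        depth_bins = []
        for i in range(n_windows):
            segment = interferogram[i * window_len:(i + 1) * window_len]
            segment = segment * np.hanning(window_len)       # taper against leakage
            padded = np.zeros(window_len * pad_factor)       # zero-pad for finer peaks
            padded[:window_len] = segment
            spectrum = np.abs(np.fft.rfft(padded))
            depth_bins.append(np.argmax(spectrum[1:]) + 1)   # skip the DC bin
        return np.array(depth_bins)

    # Toy interferogram: a beat frequency that ramps across the sweep, as it would
    # for a surface whose depth changes across the horizontal field of view.
    n_samples = 238 * 64
    t = np.linspace(0.0, 1.0, n_samples)
    toy_sweep = np.cos(2.0 * np.pi * (500.0 + 2000.0 * t) * t)
    print(depths_along_fast_axis(toy_sweep)[:10])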

The researchers tested the imaging system out on a range of objects, from coffee cups to one of their own hands.

“3D imaging of multiple static samples and video-rate imaging of a moving human hand demonstrate the great potential of this technology in a wide range of potential applications in the fields of robotics navigation, virtual reality, and 3D printing,” the team proposes.

That’s not to say there isn’t work still to be done, of course. The Duke team admits to a range of limitations in its current work, including a maximum imaging depth of 32 cm (around 12.6"). The team believes this could be extended to around 2 m (around 6.6') through the use of a higher-speed digitizer and a higher-bandwidth photodetector.
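The logic behind that proposed fix follows from the ranging relationship sketched earlier: the farther the target, the higher the beat frequency, so the maximum depth is capped by whichever is slower, the photodetector or the digitizer. The back-of-the-envelope sketch below uses an assumed sweep slope solely to show how faster acquisition hardware stretches the reach from tens of centimeters toward a couple of meters; the numbers are not taken from the paper.

    # Why faster hardware extends the depth range: the farther the target, the
    # higher the beat frequency, and the detection chain has to keep up. The
    # chirp rate below is an assumed figure used only for illustration.
    C = 3.0e8              # speed of light, m/s
    CHIRP_RATE = 1.0e18    # assumed sweep slope, Hz of optical frequency per second

    def max_depth(sample_rate_hz, detector_bw_hz, chirp_rate):
        """Highest measurable range, set by the slower of the digitizer's Nyquist
        frequency and the photodetector bandwidth."""
        f_max = min(sample_rate_hz / 2.0, detector_bw_hz)
        return C * f_max / (2.0 * chirp_rate)

    print(max_depth(4.0e9, 1.6e9, CHIRP_RATE))    # modest hardware: ~0.24 m
    print(max_depth(40.0e9, 15.0e9, CHIRP_RATE))  # faster hardware: ~2.25 m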

Other limitations include a limited field of view along the horizontal axis compared to traditional mirror-based FMCW LiDAR systems, the reliance on a moving mirror for scanning on the vertical axis, and a source of error traced in theory back to the laser’s sweep non-linearity.

The imaging system shows high enough performance for video-rate data to be captured and analyzed in real time.

Despite this, the team is clearly upbeat about its results. “In much the same way that electronic cameras have become ubiquitous, our vision is to develop a new generation of LiDAR-based 3D cameras which are fast and capable enough to enable integration of 3D vision into all sorts of products,” says Joseph Izatt, the Michael J. Fitzpatrick Distinguished Professor of Biomedical Engineering and senior author of the paper. “The world around us is 3D, so if we want robots and other automated systems to interact with us naturally and safely, they need to be able to see us as well as we can see them.”

The team’s work has been published in the journal Nature Communications under open-access terms. Additionally, raw interferogram data have been uploaded to figshare while MATLAB source code for the depth map computation system has been published to GitHub under an unspecified license.

References

Ruobing Qian, Kevin C. Zhou, Jingkai Zhang, Christian Viehland, Al-Hafeez Dhalla, and Joseph A. Izatt: Video-rate high-precision time-frequency multiplexed 3D coherent ranging, Nature Communications, Vol. 13, Article 1476. DOI: 10.1038/s41467-022-29177-9.

David Huang, Eric A. Swanson, Charles P. Lin, Joel S. Schuman, William G. Stinson, Warren Chang, Michael R. Hee, Thomas Flotte, Kenton Gregory, Carmen A. Puliafito, and James G. Fujimoto: Optical Coherence Tomography, Science, Vol. 254, Iss. 5035. DOI: 10.1126/science.1957169.

A freelance technology and science journalist and author of best-selling books on the Raspberry Pi, MicroPython, and the BBC micro:bit, Gareth is a passionate technologist with a love for both the cutting edge and more vintage topics.
