Security analysis of camera-LiDAR perception for autonomous vehicle systems

A novel class of LiDAR spoofing attacks on autonomous vehicles, the frustum attack, was later validated using an existing hardware setup.


30 Mar, 2022

The novel frustum attack class leverages the fact that a camera image is only a 2D projection of 3D space [Image Source: Research Paper]

As the foundation for reliable decision-making, perception algorithms are the backbone of autonomous vehicle development. Autonomous vehicles feed sensor data to perception algorithms that interpret the environment, and sensor fusion with multi-frame tracking is gaining momentum for 3D object detection. Camera-LiDAR fusion has shown significant performance gains on 3D vision workloads: LiDAR provides accurate 3D geometric structure, while the camera captures richer scene context and semantic information [1]. Fusing these two sensors has become a fundamental approach to achieving better performance, but it is equally necessary to analyze how these systems can be attacked.
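To make the fusion idea concrete, here is a minimal painting-style sketch in Python, in which LiDAR points are projected into the image plane and tagged with the camera's per-pixel semantics. The intrinsic matrix, semantic map, and point values are hypothetical toy values, and the sketch illustrates the general cascaded-fusion pattern rather than any specific detector from the paper:

```python
import numpy as np

def project_to_image(points_3d, K):
    """Project Nx3 points (camera coordinates) to pixels with pinhole intrinsics K."""
    in_front = points_3d[:, 2] > 0          # keep points with positive depth
    pts = points_3d[in_front]
    uvw = (K @ pts.T).T                     # homogeneous projection
    uv = uvw[:, :2] / uvw[:, 2:3]           # divide by depth to get pixels
    return uv, in_front

def paint_points(points_3d, K, semantic_map):
    """Attach a camera semantic label to each LiDAR point (painting-style fusion)."""
    h, w = semantic_map.shape
    labels = np.full(points_3d.shape[0], -1)    # -1 = outside camera FOV
    uv, in_front = project_to_image(points_3d, K)
    cols = np.round(uv[:, 0]).astype(int)
    rows = np.round(uv[:, 1]).astype(int)
    valid = (cols >= 0) & (cols < w) & (rows >= 0) & (rows < h)
    idx = np.flatnonzero(in_front)[valid]
    labels[idx] = semantic_map[rows[valid], cols[valid]]
    return labels

# Hypothetical intrinsics and a tiny 4x6 semantic map (0 = road, 1 = car).
K = np.array([[2.0, 0.0, 3.0],
              [0.0, 2.0, 2.0],
              [0.0, 0.0, 1.0]])
semantic_map = np.zeros((4, 6), dtype=int)
semantic_map[1:3, 2:5] = 1                      # a "car" region in the image
points = np.array([[0.0, 0.0, 5.0],             # lands inside the car region
                   [-4.0, 1.0, 2.0]])           # projects outside the image
print(paint_points(points, K, semantic_map))    # -> [ 1 -1]
```

The camera contributes what each point looks like; the LiDAR contributes where it is. As the frustum attack below exploits, only the second half of that division of labor carries depth information.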

Earlier security analyses of perception focused on the image domain and on LiDAR-only models, introducing spoofing attacks against LiDAR. A team of researchers identified limitations in these existing analyses of LiDAR-based perception, which become complex with multi-sensor perception and multi-frame tracking architectures. The earlier approaches also required white-box optimizations against the deployed models, leaving black-box settings unexplored. Researchers from Duke University and the University of Michigan therefore carried out an analysis of camera-LiDAR fusion systems under black-box LiDAR spoofing attacks [2]. Additionally, the team defines a novel, context-aware attack, the frustum attack, which exposes a significant vulnerability in the widely used LiDAR-only and camera-LiDAR fusion architectures.

“Our goal is to understand the limitations of existing systems so that we can protect against attacks,” said Miroslav Pajic, the Dickinson Family Associate Professor of Electrical and Computer Engineering at Duke. “This research shows how adding just a few data points in the 3D point cloud ahead or behind of where an object actually is can confuse these systems into making dangerous decisions.”

Attacks on camera-LiDAR fusion systems

In the paper, “Security Analysis of Camera-LiDAR Fusion Against Black-Box Attacks on Autonomous Vehicles,” the team also introduced a new class of perception attacks for which the attacker only needs to know the location of the true object in the environment. They define five scenarios in which these attacks can be used to launch spoofing attacks relative to existing objects in the scene. This work extends the work carried out by Y. Cao, C. Xiao, et al. [3] and J. Sun, Y. Cao, et al. [4], which focused on spoofing isolated objects in the 5-8 meter range.

Demonstration of the frustum attack: The attacker launches a malicious frustum attack against a victim AV using a target car [Left]; a physical experiment demonstrates that an attacker can stably spoof longitudinally consistent points in the frustum of a target vehicle [Right].


To align with existing work, the team defines three attack goals: a false-positive outcome, a false-negative outcome, and a translation outcome, in which the detected object’s bounding box is translated by some distance. As the team explains in the blog post on the Duke University website, “the new attack strategy works by shooting a laser gun into a car’s LIDAR sensor to add false data points to its perception.” If those data points are wildly out of place with what the car’s vision system sees, existing systems can recognize the attack; but the new research shows that 3D LiDAR data points placed within a certain area of the camera’s 2D field of view (FOV) can fool the system into producing errors.
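As a rough sketch of how these three outcome categories can be scored, the Python below reduces the idea to center distance between ground-truth and detected boxes in the ground plane. The Box class, threshold, and matching rule are simplifications introduced here for illustration, not the paper’s actual evaluation metric:

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: float  # center, meters (longitudinal)
    y: float  # center, meters (lateral)
    l: float  # length, meters
    w: float  # width, meters

def classify_outcome(truth, detection, max_shift=0.5):
    """Classify one perception outcome against one ground-truth object.

    Illustrative only: the paper evaluates full 3D bounding boxes with its
    own matching criteria; this reduces the idea to ground-plane center
    distance with an arbitrary threshold.
    """
    if truth is None and detection is not None:
        return "false positive"    # spoofed object where nothing exists
    if truth is not None and detection is None:
        return "false negative"    # real object erased from perception
    if truth is None and detection is None:
        return "correct (no object)"
    shift = ((truth.x - detection.x) ** 2 + (truth.y - detection.y) ** 2) ** 0.5
    return "translation" if shift > max_shift else "correct"

# A detection dragged 3 m ahead of the true vehicle -> translation outcome.
truth = Box(x=10.0, y=0.0, l=4.5, w=1.8)
spoofed = Box(x=13.0, y=0.0, l=4.5, w=1.8)
print(classify_outcome(truth, spoofed))   # translation
```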

This vulnerable region in front of the camera’s lens is shaped like a 3D pyramid with its tip sliced off, a shape known as a frustum, which gives the attack its name. In the case of a forward-facing camera mounted on a car, this means a few data points placed in front of or behind another nearby car can shift the entire system’s perception of it by several meters.
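The geometric core of the vulnerability is that any 3D point along a ray through the camera center lands on the same pixel, so a camera image cannot distinguish depths along that ray. A minimal sketch with hypothetical intrinsics:

```python
import numpy as np

# Pinhole projection: points on the same ray through the camera center
# project to the same pixel, regardless of their depth.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])   # hypothetical camera intrinsics

def project(p):
    uvw = K @ p
    return uvw[:2] / uvw[2]               # divide by depth to get the pixel

true_point = np.array([1.0, 0.0, 20.0])   # real LiDAR return on a car, 20 m out
spoof_point = true_point * (27.0 / 20.0)  # same ray, spoofed 7 m further back

print(project(true_point))    # -> [690. 360.]
print(project(spoof_point))   # -> [690. 360.]  same pixel, different depth
```

Any spoofed LiDAR return placed inside a target’s frustum therefore projects onto the target’s own image region, so the camera evidence never contradicts it.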

“This so-called frustum attack can fool adaptive cruise control into thinking a vehicle is slowing down or speeding up,” Pajic said. “And by the time the system can figure out there’s an issue, there will be no way to avoid hitting the car without aggressive maneuvers that could create even more problems.”

Evaluation of the novel frustum attack

The researchers evaluated the frustum attack against the state-of-the-art defenses proposed in existing LiDAR spoofing research, using various perception algorithms within three distinct LiDAR-only and three distinct camera-LiDAR fusion architectures. The fusion architectures span the cascaded-semantic, feature, and tracking levels, and the evaluation covers more than 75 million attack scenarios, making for a rigorous methodology. The team claims this analysis is the largest on LiDAR spoofing to date, and the first to extensively evaluate multiple multi-sensor fusion architectures for perception.

Evaluation of the frustum attack: The frustum attack was evaluated on an autonomous vehicle running Baidu Apollo software using perception data from the LGSVL simulator. [Image Source: Research Paper]


To evaluate the impact of LiDAR attacks on autonomous vehicles equipped with multi-frame tracking, the researchers built a frustum attack case study using longitudinal sequences of perception data. They started by analyzing multi-frame fusion and tracking with representative algorithms, then tested the frustum attack end-to-end on Baidu Apollo using the LGSVL simulator. “The case studies illuminate the high-impact adversarial situations that endanger vehicle and passenger safety that occur under the frustum attack when attacking over multiple time points, effectively deceiving the host vehicle’s tracking and control,” the researchers note.
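To see why consistent spoofing across frames is so damaging, consider a minimal alpha-beta range tracker, an illustrative stand-in rather than Apollo’s actual tracking module. A few consecutive frames of measurements translated toward the victim drag the estimated range and make the estimated range rate go negative, which is exactly the “lead vehicle is braking” signal adaptive cruise control reacts to:

```python
# Illustrative constant-velocity alpha-beta tracker (not Apollo's tracker).
def alpha_beta_track(measurements, dt=0.1, alpha=0.85, beta=0.3):
    x, v = measurements[0], 0.0          # range (m) and range rate (m/s)
    for z in measurements[1:]:
        x_pred = x + v * dt              # constant-velocity prediction
        r = z - x_pred                   # innovation from the new measurement
        x = x_pred + alpha * r           # corrected range estimate
        v = v + (beta / dt) * r          # corrected range-rate estimate
    return x, v

true_range = [30.0] * 10                 # lead car holds a steady 30 m gap
# Frustum attack: from frame 5 on, spoofed points sit 4 m closer to the victim.
spoofed = true_range[:5] + [z - 4.0 for z in true_range[5:]]

print(alpha_beta_track(true_range))  # ~ (30.0, 0.0): steady following
print(alpha_beta_track(spoofed))     # range ~26 m, negative range rate:
                                     # the tracker "sees" the lead car braking
```

The same mechanism run in the opposite direction, spoofing points behind the target, makes the lead car appear to accelerate away, tempting the victim to close the gap.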

The results of the paper will be presented on August 10-12 at the 2022 USENIX Security Symposium. More details on the thorough analysis of LiDAR-only and camera-LiDAR perception, along with the frustum attack case study, are available in the research work published under open-access terms on arXiv, Cornell University’s research-sharing platform.

References

[1]  H. Zhong, H. Wang, Z. Wu, C. Zhang, Y. Zheng, and T. Tang, “A survey of LiDAR and camera fusion enhancement,” Procedia Computer Science, vol. 183, pp. 579–588, Jan. 2021, doi: 10.1016/j.procs.2021.02.100.

[2] R. S. Hallyburton, Y. Liu, Y. Cao, Z. M. Mao, and M. Pajic, “Security Analysis of Camera-LiDAR Fusion Against Black-Box Attacks on Autonomous Vehicles,” arXiv: 2106.07098 [cs.CR].

[3] Y. Cao, C. Xiao, B. Cyr, Y. Zhou, W. Park, S. Rampazzi, Q. A. Chen, K. Fu, and Z. M. Mao, “Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving,” in Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security (CCS), 2019.

[4] J. Sun, Y. Cao, Q. A. Chen, and Z. M. Mao, “Towards Robust LiDAR-based Perception in Autonomous Driving: General Black-box Adversarial Sensor Attack and Countermeasures,” in Proceedings of the 29th USENIX Security Symposium, 2020.


Abhishek Jadhav is an engineering student, RISC-V ambassador, and freelance technology and science writer with bylines at EdgeIR, Electromaker, Embedded Computing Design, Electronics-Lab, and Hackster.
