Tran, D., Ahlgren, N., Depcik, C., and He, H. (2023). Adaptive Active Fusion of Camera and Single-Point LiDAR for Depth Estimation. IEEE Transactions on Instrumentation and Measurement, 72, 3284129.


Abstract

Depth sensing is an important problem in many applications, such as autonomous driving, robotics, and automation. This article presents an adaptive active fusion method for scene depth estimation using a camera and a single-point light detection and ranging (LiDAR) sensor. An active scanning mechanism is proposed to guide laser scanning based on critical visual and saliency features, and a convolutional spatial propagation network (CSPN) is designed to generate and refine a full depth map from the sparse depth scans. The active scanning mechanism generates a depth mask using log-spectrum saliency detection, Canny edge detection, and uniform sampling; the mask indicates critical regions that require high-resolution laser scanning. To reconstruct a full depth map, the designed CSPN extracts affinity matrices from the sparse depth scans while preserving global spatial information in the images. The performance of the proposed method was evaluated against state-of-the-art methods on the NYU Depth Dataset V2 (NYUv2), and the experiments demonstrated that it outperforms them in reconstruction accuracy and robustness to measurement noise. The proposed method was also evaluated in real-world scenarios.
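As a rough illustration of the mask-generation step described above, the sketch below combines spectral-residual (log-spectrum) saliency, Canny edges, and a uniform grid into a binary scan mask with OpenCV. The function names, thresholds, and grid step are illustrative assumptions; the abstract does not give the paper's exact parameters or fusion rule.

```python
import cv2
import numpy as np

def spectral_residual_saliency(gray):
    # Log-spectrum (spectral residual) saliency, Hou & Zhang (2007):
    # the log-amplitude spectrum minus its local average highlights
    # regions that deviate from the image's statistical background.
    f = np.fft.fft2(gray.astype(np.float64))
    log_amp = np.log(np.abs(f) + 1e-8)
    residual = log_amp - cv2.blur(log_amp, (3, 3))
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * np.angle(f)))) ** 2
    sal = cv2.GaussianBlur(sal, (9, 9), 2.5)
    return cv2.normalize(sal, None, 0.0, 1.0, cv2.NORM_MINMAX)

def build_scan_mask(image_bgr, sal_thresh=0.5, grid_step=16):
    # Union of salient regions, scene edges, and a coarse uniform grid;
    # True pixels are candidates for high-resolution laser scanning.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    salient = spectral_residual_saliency(gray) > sal_thresh
    edges = cv2.Canny(gray, 100, 200) > 0
    uniform = np.zeros(gray.shape, dtype=bool)
    uniform[::grid_step, ::grid_step] = True
    return salient | edges | uniform
```

For the refinement stage, the following is a minimal PyTorch sketch of one propagation step of a CSPN (Cheng et al., 2018), assuming eight-neighbor affinities predicted by some upstream network; the paper's actual architecture and training details may differ.

```python
import torch
import torch.nn.functional as F

def cspn_step(depth, affinity, sparse_depth, sparse_mask):
    # One CSPN propagation step: each pixel becomes an affinity-weighted
    # average of itself and its 3x3 neighborhood.
    #   depth:        (B, 1, H, W) current depth estimate
    #   affinity:     (B, 8, H, W) raw affinities for the 8 neighbors
    #   sparse_depth: (B, 1, H, W) LiDAR measurements
    #   sparse_mask:  (B, 1, H, W) 1 where a measurement exists
    # Normalize neighbor weights; the center weight keeps the sum at 1.
    norm = affinity.abs().sum(dim=1, keepdim=True).clamp(min=1e-6)
    nbr_w = affinity / norm                        # (B, 8, H, W)
    ctr_w = 1.0 - nbr_w.sum(dim=1, keepdim=True)   # (B, 1, H, W)

    # Gather the 3x3 neighborhood of every pixel and drop the center.
    patches = F.unfold(depth, kernel_size=3, padding=1)        # (B, 9, H*W)
    patches = patches.view(depth.shape[0], 9, *depth.shape[2:])
    nbrs = torch.cat([patches[:, :4], patches[:, 5:]], dim=1)  # (B, 8, H, W)

    out = ctr_w * depth + (nbr_w * nbrs).sum(dim=1, keepdim=True)
    # Re-anchor the propagated map to the sparse LiDAR measurements.
    return sparse_mask * sparse_depth + (1 - sparse_mask) * out
```

In practice this step is applied for a fixed number of iterations, with the LiDAR points re-injected after every step so the measured depths are preserved exactly while the network fills in the rest of the map.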