
One article to understand lidar and visual fusion perception of autonomous driving

WBOY
Release: 2023-06-16 12:11:45

2022 is the window period in which intelligent driving moves from L2 to L3/L4. More and more automakers have begun deploying higher-level intelligent driving for mass production, and the era of the intelligent car has quietly arrived.

With improvements in lidar hardware, automotive-grade mass production, and falling costs, high-level intelligent driving functions have driven lidar into mass-produced passenger cars. A number of lidar-equipped models will be delivered this year, which is why 2022 is also known as "the first year of lidar on the road."

01 Lidar sensors vs. image sensors

Lidar is a sensor that accurately measures the three-dimensional position of objects; the name stands for light (laser) detection and ranging. With its excellent performance in target contour measurement and general obstacle detection, it is becoming a core component of L4 autonomous driving.

However, lidar's ranging limit (generally around 200 meters, though production models from different manufacturers quote different figures) gives it a much smaller perception range than an image sensor.

And because its angular resolution is relatively coarse (generally 0.1° or 0.2°), the resolution of the point cloud is far lower than that of an image sensor. At long range, the points falling on a target may be so sparse that the object barely forms a shape at all. For point cloud target detection, the effective range the algorithm can actually use is only about 100 meters.
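
A back-of-the-envelope calculation makes this sparsity concrete. The sketch below is illustrative only: it assumes a uniform angular grid and a flat target facing the sensor, whereas real scan patterns vary by vendor.

```python
import math

def points_on_target(width_m: float, height_m: float, range_m: float,
                     h_res_deg: float = 0.2, v_res_deg: float = 0.2) -> int:
    """Rough count of lidar returns on a flat, sensor-facing target."""
    # Angular extent of the target as seen from the sensor, in degrees
    h_angle = math.degrees(2 * math.atan(width_m / (2 * range_m)))
    v_angle = math.degrees(2 * math.atan(height_m / (2 * range_m)))
    # One return per angular-resolution cell that fits inside the target
    return int(h_angle / h_res_deg) * int(v_angle / v_res_deg)

# A 0.5 m x 1.7 m pedestrian-sized target on a 0.2° grid:
for r in (50, 100, 200):
    print(f"{r} m: ~{points_on_target(0.5, 1.7, r)} points")
```

At 50 m such a target still collects a dozen or two points; at 200 m it can receive none at all, which is why the usable detection range is far shorter than the nominal ranging limit.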

Image sensors acquire rich detail about the surroundings at high frame rates and high resolutions, and they are cheap. Multiple cameras with different FOVs and resolutions can be deployed to cover different distances and fields of view, with resolutions reaching 2K-4K.

However, an image sensor is a passive device with limited depth perception and poor ranging accuracy, and in harsh environments the difficulty of its perception tasks rises sharply.

Strong glare, low light at night, rain, snow, fog, and similar weather and lighting conditions place high demands on sensor algorithms. Lidar, for its part, is largely insensitive to ambient light, but its ranging degrades significantly on waterlogged roads, against glass walls, and in similar conditions.

Lidar and image sensors thus each have their own strengths and weaknesses. Most high-level intelligent driving passenger cars therefore combine different sensors so that their strengths complement one another and their overlap provides redundancy.

Such fused sensing solutions have become one of the key technologies for high-level autonomous driving.

02 Point cloud and image fusion perception based on deep learning

Fusing point clouds with images belongs to the field of multi-sensor fusion (MSF). There are traditional probabilistic methods and deep learning methods, and they are commonly divided into three levels according to how abstract the information is when the fusion happens:

Data layer fusion (Early Fusion)

The sensors' observation data are fused first, and features are then extracted from the fused data for recognition. In 3D object detection, PointPainting (CVPR 2020) takes this approach: it first runs semantic segmentation on the image, maps the segmentation results onto the point cloud through a point-to-pixel projection matrix, and then feeds the "painted" point cloud into a 3D point cloud detector to regress the target boxes.
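
Below is a minimal sketch of this "painting" step; the function name, array shapes, and calibration-matrix format are assumptions for illustration, not taken from the PointPainting implementation.

```python
import numpy as np

def paint_points(points: np.ndarray, seg_scores: np.ndarray,
                 proj: np.ndarray) -> np.ndarray:
    """PointPainting-style point decoration (illustrative sketch).

    points:     (N, 4) lidar points [x, y, z, intensity]
    seg_scores: (H, W, C) per-pixel class scores from an image segmenter
    proj:       (3, 4) lidar-to-image projection matrix from calibration
    Returns (M, 4 + C) points with class scores appended; points falling
    outside the image are dropped.
    """
    H, W, C = seg_scores.shape
    xyz1 = np.hstack([points[:, :3], np.ones((len(points), 1))])  # homogeneous
    uvw = xyz1 @ proj.T                                           # (N, 3)
    z = uvw[:, 2]
    u = np.round(uvw[:, 0] / z).astype(int)
    v = np.round(uvw[:, 1] / z).astype(int)
    keep = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    return np.hstack([points[keep], seg_scores[v[keep], u[keep]]])
```

Because the decorated points then go to the 3D detector unchanged, this scheme is easy to bolt onto an existing point cloud pipeline.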


Feature layer fusion (Deep Fusion)

Features are first extracted from the observation data each sensor provides, and those features are then fused for recognition. In deep-learning-based fusion, this approach runs a feature extractor on both the point cloud branch and the image branch, and the two networks are fused semantically, level by level, during the feedforward pass, achieving multi-scale semantic fusion.

Deep-learning-based feature-level fusion places high demands on the spatiotemporal synchronization of the sensors: once synchronization slips, the quality of the feature fusion suffers directly. At the same time, because lidar and cameras differ in scale and viewpoint, their fused features struggle to achieve a 1 + 1 > 2 effect.
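
A sketch of what one such fusion stage can look like, written in PyTorch under assumptions not in the original text: point features, an image feature map, and the points' normalized projected pixel coordinates are all available and correctly synchronized.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointImageFusion(nn.Module):
    """One feature-level fusion stage (illustrative sketch).

    Image features are sampled at each point's projected pixel location,
    concatenated with the point features, then mixed by a small MLP.
    """
    def __init__(self, point_dim: int, img_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(point_dim + img_dim, point_dim), nn.ReLU())

    def forward(self, point_feats: torch.Tensor,        # (B, N, Cp)
                img_feats: torch.Tensor,                # (B, Ci, H, W)
                uv_norm: torch.Tensor) -> torch.Tensor: # (B, N, 2) in [-1, 1]
        # uv_norm must come from well-synchronized, well-calibrated sensors:
        # any misalignment pairs a point with the wrong image feature.
        sampled = F.grid_sample(img_feats, uv_norm.unsqueeze(1),
                                align_corners=False)    # (B, Ci, 1, N)
        sampled = sampled.squeeze(2).transpose(1, 2)    # (B, N, Ci)
        return self.mlp(torch.cat([point_feats, sampled], dim=-1))
```

Stacking such a stage at several scales of both branches is one way to realize the level-by-level semantic fusion described above.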


Decision-making layer fusion (Late Fusion)

Compared with the first two, this is the least complex fusion method. Nothing is fused at the data or feature layer; fusion happens at the target level. The network structures for the different sensors do not affect one another and can be trained and combined independently.

Because the sensors and detectors fused at the decision layer are mutually independent, the system can still fall back on the remaining sensor if one fails, so the engineering robustness is better.
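
A minimal sketch of such target-level fusion, assuming each detector outputs axis-aligned BEV boxes with confidence scores; the box format, IoU threshold, and keep-the-higher-score rule are illustrative choices, not a fixed standard.

```python
def iou_2d(a, b):
    """Axis-aligned IoU for BEV boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def late_fuse(lidar_dets, camera_dets, iou_thr=0.5):
    """Merge two detectors' outputs; each det: {"box": ..., "score": float}."""
    fused, used = [], set()
    for ld in lidar_dets:
        best_j, best_iou = -1, iou_thr
        for j, cd in enumerate(camera_dets):
            if j not in used:
                iou = iou_2d(ld["box"], cd["box"])
                if iou > best_iou:
                    best_j, best_iou = j, iou
        if best_j >= 0:
            used.add(best_j)
            # Matched pair: keep whichever detector is more confident
            fused.append(max(ld, camera_dets[best_j], key=lambda d: d["score"]))
        else:
            fused.append(ld)  # lidar-only detection survives
    # Camera-only detections survive too
    fused.extend(cd for j, cd in enumerate(camera_dets) if j not in used)
    return fused
```

If one detector produces nothing, for example because its sensor failed, the loop simply passes through the other detector's boxes, which is exactly the redundancy described above.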


As lidar-vision fusion perception continues to iterate, and as knowledge of scenarios and cases continues to accumulate, more and more full-stack fusion computing solutions will emerge, bringing a safer and more reliable future to autonomous driving.
