January 28-30, 2019 | Hyatt Regency | Denver, CO
Pre-processing of Dense but Noisy LiDAR Point Clouds: Impact on Information Extraction in High-precision LiDAR Surveying

07 Feb 2018
8:55 am - 9:20 am
Centennial A-C

Traditional LiDAR surveying provides precise information on surveyed objects by sequentially measuring ranges to the objects through laser ranging while simultaneously measuring the position and orientation of the laser beams in 3D space with high accuracy. An intermediate data product of LiDAR surveying is a georeferenced point cloud, with point densities typically ranging from 4 to 40 points per square meter.

Every point of the point cloud stems from the interaction of a laser pulse with an object and can be described by its coordinates, its attributes such as target reflectance, and its accuracy in all dimensions, which is derived from the accuracy of the LiDAR system in the given environment. However, the raw point cloud is not the final product. In order to retrieve information from the point cloud, additional processing such as classification, filtering, and modelling is applied. Modelling in particular always relies on “true” measurements, i.e., points derived from actual LiDAR measurements.
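A per-point record of this kind can be sketched as a minimal data structure; all field names here are illustrative assumptions, not a format defined by the authors:

```python
from dataclasses import dataclass

@dataclass
class LidarPoint:
    """One point of a georeferenced cloud (illustrative field names)."""
    x: float            # georeferenced coordinates [m]
    y: float
    z: float
    reflectance: float  # target-reflectance attribute
    sigma_xy: float     # horizontal accuracy estimate [m]
    sigma_z: float      # vertical accuracy estimate [m]
    classification: int = 0  # assigned only in later processing
```
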

With the arrival of new technologies, in particular LiDAR systems with focal-plane arrays, point clouds are still acquired sequentially, but a single laser pulse now yields hundreds or thousands of raw range readings at once. This provides initial intermediate point clouds with tremendous point density. However, these point clouds are extremely noisy compared to traditionally acquired intermediate point clouds, in several ways. First, owing to their high sensitivity down to the single-photon level, the detectors respond to uncorrelated photons from background radiation and exhibit non-negligible dark-count rates, so the point clouds show many “points in the air”. Second, the ranging itself is prone to a significantly higher level of range noise.

In order to still retrieve information from the point cloud in the traditional way – by classification, filtering, and modelling – these point clouds have to be pre-processed to reduce noise significantly; only the pre-processed point clouds are made available to the user. It seems feasible to get rid of “points in the air” quite reliably by applying spatial density analysis. Reducing the intrinsic range noise, however, can only be tackled by some form of spatial averaging, trading point density against range noise and spatial resolution. Spatial averaging works well for simple objects such as extended planes, but it introduces smoothing at disruptive object features such as height changes at buildings and roofs, and at small structures such as low vegetation and aerials on rooftops.
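The two generic pre-processing steps named above can be sketched as follows; this is a minimal illustrative implementation (brute-force neighbor search, uniform voxel grid), not the vendors' actual pipeline, and all function names and parameter values are assumptions:

```python
import numpy as np

def remove_sparse_points(points, radius=1.0, min_neighbors=5):
    """Spatial density analysis: drop 'points in the air' by keeping
    only points with enough neighbors within `radius`.
    Brute-force O(n^2); real pipelines would use a spatial index."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbor_counts = (d < radius).sum(axis=1) - 1  # exclude the point itself
    return points[neighbor_counts >= min_neighbors]

def voxel_average(points, voxel=0.5):
    """Spatial averaging: replace all points in each voxel by their
    centroid, trading point density against range noise and resolution."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    n = inverse.max() + 1
    sums = np.zeros((n, 3))
    counts = np.zeros(n)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]
```

Note how the voxel size directly controls the trade-off the abstract describes: larger voxels suppress more range noise but smooth away more spatial detail.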

If the pre-processed point cloud is to maintain sharp edges and ridges, certain assumptions have to be applied when de-noising the initial point clouds, which can be seen as pre-modelling. However, the applied model assumptions have a direct impact on the geometry, and thus the information content, of the delivered point cloud.
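One common edge-preserving pre-modelling assumption is local planarity: fit a plane to each point's neighborhood, reject neighbors that violate the plane model (e.g. points across a roof edge), and project the point onto the refined plane. The sketch below illustrates that idea only; it is not the method of the talk, and the tolerances are assumed values:

```python
import numpy as np

def denoise_planar(points, radius=0.5, residual_tol=0.1):
    """Edge-aware de-noising under a local-planarity model assumption.
    Neighbors whose plane residual exceeds `residual_tol` are rejected
    before refitting, so averaging does not mix points across an edge.
    Brute-force neighbor search, illustrative only."""
    out = points.copy()
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[d < radius]
        if len(nbrs) < 3:
            continue
        for _ in range(2):  # fit, reject outliers, refit once
            c = nbrs.mean(axis=0)
            # plane normal = direction of smallest variance (last SVD vector)
            _, _, vt = np.linalg.svd(nbrs - c)
            n = vt[-1]
            resid = np.abs((nbrs - c) @ n)
            keep = resid < residual_tol
            if keep.sum() < 3:
                break
            nbrs = nbrs[keep]
        # project the point onto the final local plane
        out[i] = p - ((p - c) @ n) * n
    return out
```

This makes the abstract's point concrete: wherever the planarity assumption holds, noise is reduced; wherever it is wrong, the output geometry is biased toward the model.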

We will discuss the impact of pre-modelling on delivered point clouds derived from very dense but noisy data as provided by LiDAR systems with focal-plane array detectors, and we will compare the pros and cons with respect to point clouds delivered by state-of-the-art LiDAR systems such as waveform LiDAR.

Track Name: LiDAR Data Processing
Session Date: Feb 7 2018 8:55 am – 9:20 am


© Diversified Communications. All rights reserved.