Code supplement: Towards National-Scale Ecological Applications: Harmonised framework for LiDAR point cloud processing for vegetation metrics calculation
- Create three tables in PostgreSQL. The SQL script can be found under /setup/db.
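A minimal sketch of applying the table-creation script with psycopg2; the script file name below is hypothetical (use the actual script under /setup/db), and the connection details mirror the config example further down:
```
# Sketch only: run the table-creation SQL against PostgreSQL.
import psycopg2

conn = psycopg2.connect(dbname="dbname", user="user", password="xxxxxx",
                        host="localhost", port=5432)
with open("setup/db/create_tables.sql") as f, conn.cursor() as cur:
    cur.execute(f.read())  # psycopg2 accepts multi-statement scripts
conn.commit()
conn.close()
```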
- Create the virtual environment:
```
micromamba create -n myenv -f ./micromamba_env.yaml
```
- Install the dependencies with Poetry:
```
micromamba activate myenv
poetry install
```
- This creates executable scripts under the environment's bin folder, which you can call directly.
- To run the processing in normal mode:
```
lidarprocessing -c <config yaml> -i <identifier>
```
- To run the processing in recovery mode:
```
lidarprocessing -c <config yaml> -r <identifier>
```
The processing of each LAZ file consists of two steps (see /model/processing_script):
- Fix the LAZ file (fix_laz_file.py); see the sketch after this list:
  - Remove overlapping points
  - Assign the correct CRS (EPSG:3301)
  - Write out the fixed file in LAZ version 1.4
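A minimal sketch of this step as a PDAL pipeline via the PDAL Python bindings. Treating class 12 as the overlap class is an assumption; the actual fix_laz_file.py may identify overlap points differently:
```
import json
import pdal

# Sketch only: drop overlap points, assign (not reproject to) EPSG:3301,
# and write a LAZ 1.4 file.
pipeline = pdal.Pipeline(json.dumps([
    {"type": "readers.las", "filename": "447696_2019_tava.laz"},
    # Assumption: overlap points carry ASPRS class 12
    {"type": "filters.range", "limits": "Classification![12:12]"},
    {"type": "writers.las",
     "filename": "447696_2019_tava_fixed.laz",
     "a_srs": "EPSG:3301",     # assign the CRS without reprojecting
     "minor_version": 4,       # LAS/LAZ version 1.4
     "compression": "laszip"},
]))
pipeline.execute()
```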
- Reclassify the fixed file (reclassify_laz_file.py):
  - Identification of points within sea polygons (Estonian Topographic Database (ETD) layer `E_201_meri_a`), with the binary attribute `WithinSea` assigned.
  - Identification of points within 13 m buffers of high-voltage (110, 220 or 330 kV) powerlines (ETD layer `E_601_elektriliin_j`), with the binary attribute `WithinPowerline` assigned.
  - Identification of points within inland waterbody polygons (ETD layers `E_202_seisuveekogu_a` and `E_203_vooluveekogu_a`), with the binary attribute `WithinWaterBody` assigned.
  - Identification of points located within building polygons (ETD layers `E_401_hoone_ka` and `E_403_muu_rajatis_ka`), with the binary attribute `WithinBuilding` assigned.
  - Calculation of NDVI for each point (see the sketch after this list) by 1) using a seasonal NDVI raster as input for the PDAL filter `filters.hag_dem` to calculate a temporary `HeightAboveGround` (HAG) attribute by subtracting NDVI from Z, 2) copying this temporary HAG into a new attribute called `NDVI`, and 3) subtracting `NDVI` from Z to retrieve the original NDVI raster values. We used this workaround because there was no direct way to add NDVI values to the point cloud.
  - Calculation of the actual `HeightAboveGround` using the corresponding 1 m DTM (including support for VRT input), overwriting the temporary HAG from the previous step.
  - Assignment of new classification values based on defined rules, leveraging the attributes added in the previous steps, while preserving the original classifications in the attribute `OriginalClassification`.
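The NDVI workaround and HAG overwrite can be sketched as a PDAL pipeline. This is a sketch under assumptions: the NDVI raster name is a placeholder, and the polygon tests and classification rules of the real script are omitted:
```
import json
import pdal

# Sketch only: filters.hag_dem computes HeightAboveGround = Z - raster, so
# feeding it the NDVI raster yields HAG = Z - NDVI; ferrying that into a new
# NDVI dimension and assigning NDVI = Z - NDVI recovers the raster value.
pipeline = pdal.Pipeline(json.dumps([
    {"type": "readers.las", "filename": "447696_2019_tava_fixed.laz"},
    {"type": "filters.hag_dem", "raster": "ndvi_seasonal.tif"},   # temp HAG = Z - NDVI
    {"type": "filters.ferry", "dimensions": "HeightAboveGround=>NDVI"},
    {"type": "filters.assign", "value": "NDVI = Z - NDVI"},       # NDVI = raster value
    # Overwrite the temporary HAG with the real one from the 1 m DTM
    {"type": "filters.hag_dem", "raster": "54494_dem_1m_2017-2020.tif"},
    {"type": "writers.las",
     "filename": "447696_2019_tava_reclassified.laz",
     "minor_version": 4,
     "extra_dims": "all",        # keep NDVI and other added attributes
     "compression": "laszip"},
]))
pipeline.execute()
```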
The above two steps can also be run individually:
```
# fix_laz_file.py input_file output_file
python -m lidar_processor.model.processing_script.fix_laz_file 447696_2019_tava.laz 447696_2019_tava_fixed.laz

# reclassify_laz_file.py input_file output_file dem_file etak_file
python -m lidar_processor.model.processing_script.reclassify_laz_file 447696_2019_tava_fixed.laz 447696_2019_tava_reclassified.laz 54494_dem_1m_2017-2020.tif ETAK_EESTI_GPKG_2019_01_01/ETAK_EESTI_GPKG.gpkg
```
Since there are millions of LAZ files to process, the state of each file needs to be tracked. The batch processing consists of four states:
| State | Process | Failing state |
|---|---|---|
| 0 | initial (record creation) | Null |
| 1 | LAZ/DEM file downloaded | -1 |
| 2 | LAZ file fixed | -2 |
| 3 | LAZ file reclassified | -3 |
On success a file advances through the states 0 → 1 → 2 → 3; if a step fails, the corresponding negative failing state from the table above is recorded instead, as sketched below.
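A hypothetical illustration of this bookkeeping (the table and column names are assumptions; the real schema is created by the scripts under /setup/db):
```
import psycopg2

def set_state(conn, file_id, state):
    """Record a file's processing state; negative values mark failures."""
    with conn.cursor() as cur:
        cur.execute(
            "UPDATE lidar_processing.file_state SET state = %s WHERE id = %s",
            (state, file_id),
        )
    conn.commit()

def run_step(conn, file_id, target_state, step):
    """Run one processing step; advance on success, mark the failing state on error."""
    try:
        step()  # e.g. download, fix, or reclassify
        set_state(conn, file_id, target_state)
    except Exception:
        set_state(conn, file_id, -target_state)  # e.g. -2 if fixing failed
        raise
```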
Config file example:
```
db:
  dbname: 'dbname'
  db_schema: 'lidar_processing'
  user: 'user'
  password: 'xxxxxx'
  host: 'localhost'
  port: 5432
storage:
  gsjson: 'google_auth_json'
  bucket: 'gs://bucket'
  laz_path: 'test_lidar_processing/LAZ'
  fix_path: 'test_lidar_processing/LAZ_fixed'
  reclassify_path: 'test_lidar_processing/Reclassified'
  dem_path: 'test_lidar_processing/DTM'
  etak_path: './ETAK'
lidar:
  laz_mapsheets: [ 'laz_map_sheets_#' ]
  laz_to_crs: ''
  laz_year: 2017
  laz_type: 'tava'
  dem_year: 2017
```
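A minimal sketch of reading this config in Python (assuming PyYAML; the file name config.yaml is a placeholder):
```
import yaml

# Sketch only: load the YAML config and pull a few values out of it.
with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

print(cfg["db"]["dbname"], cfg["storage"]["bucket"], cfg["lidar"]["laz_year"])
```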
If any exception occurs during batch processing, the end of the log will indicate that recovery is needed. To recover that particular job, run lidarprocessing with the -r option followed by the identifier name.
