
Second German State Goes Open LiDAR

The floodgates of open geospatial data have opened in Germany. Days after reporting on the first state-wide release of open LiDAR, we are happy to follow up with a second wonderful open data story. The state of Thuringia (Thüringen) – also called the “green heart of Germany” – has also implemented an open geospatial data policy. This had already been announced in March 2016 but appears to have gone online only recently. A reader of our last blog article pointed this out in the comments. And it’s not just LiDAR. You can download:

It all comes with the same permissive license as OpenNRW’s data. This is open data madness! Everything you could possibly hope for, presented via a very functional download portal. Kudos to TLVermGeo (“Thüringisches Landesamt für Vermessung und Geoinformation”) for creating an open treasure trove of free-for-all geospatial data.

Let us have a look at the LiDAR. We use the interactive portal to zoom to an area of interest. With the recent rise of demagogues it cannot hurt to look at a stark reminder of where such demagoguery can lead. In his 1941 play “The Resistible Rise of Arturo Ui” – a satirical allegory on the rise of Adolf Hitler – Bertolt Brecht writes “… don’t rejoice too soon at your escape. The womb he crawled from is still going strong.”

Zooming in gets us to the tiles.
The city of Weimar, famous for Goethe, Schiller, Bauhaus, and National Socialism.
We zoom to the nearby forested area called "Buchenwald" (meaning beech forest).
This is the site of the former Buchenwald concentration camp.

We are downloading LiDAR data around the Buchenwald concentration camp. According to Wikipedia, it was established in July 1937 and was one of the largest concentration camps on German soil. Today the remains of Buchenwald serve as a memorial and as a permanent exhibition and museum.

We download the 15 tiles surrounding the blue one: two to its left, two to its right, and a corresponding row of five tiles above and below. Each of the 15 zipped archives contains a *.laz file and a *.meta file. The *.laz file contains the LiDAR points and the *.meta file contains the textual information below, where “Lage” and “Höhe” refer to “horizontal” and “vertical”:

Datei: las_655_5653_1_th_2010-2013.laz
Erfassungsdatum: 2011-03
Erfassungsmethode: Airborne Laserscanning
Lasergebiet: Laser_04_2010
EPSG-Code Lage: 25832
EPSG-Code Höhe: 5783
Quasigeoid: GCG2005
Genauigkeit Lage: 0.12m
Genauigkeit Höhe: 0.04m
Urheber: (c) GDI-Th, Freistaat Thueringen, TLVermGeo

Next we will run a few quality checks on the 15 tiles by processing them with lasinfo, lasoverlap, lasgrid, and las2dem. We output all results into a folder named ‘quality’.

With lasinfo we create one text file per tile that summarizes its contents. The ‘-cd’ option computes the all return and last return density. The ‘-histo point_source 1’ option produces a histogram of point source IDs that are supposed to store which flight line each return came from. The ‘-odir’ and ‘-odix’ options specify the directory for the output and an appendix to the output file name. The ‘-cores 4’ option starts 4 processes in parallel, each working on a different tile.

lasinfo  -i las_*2010-2013.laz ^
         -cd ^
         -histo point_source 1 ^
         -odir quality -odix _info -otxt ^
         -cores 4

If you scrutinize the resulting text files you will find that the average last return density ranges from 6.29 to 8.13 and that the point source IDs 1 and 9999 seem to encode some special points. Likely those are synthetic points added to improve the derived rasters, similar to the “ab”, “ag”, and “aw” files in the OpenNRW LiDAR. Odd is the lack of intermediate returns despite return numbers ranging all the way up to 7. It looks like only the first returns and the last returns are made available (as for the OpenNRW LiDAR). That will disappoint those who were planning to use this LiDAR for forest or vegetation mapping. The header of the *.laz files does not store geo-referencing information, so we will have to enter that manually. And the classification codes do not follow the standard ASPRS assignment. Appended to each histogram line below is our (current) best guess at what these classification codes mean:

[...]
histogram of classification of points:
 887223 ground (2) ground
 305319 wire guard (13) building
 172 tower (15) bridges
 41 wire connector (16) synthetic ground under bridges
 12286 bridge deck (17) synthetic ground under building
 166 Reserved for ... (18) synthetic ground building edge
 5642801 Reserved for ... (20) non-ground
[...]
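
Should this guess hold up, the non-standard codes could be remapped closer to the ASPRS convention with las2las. A minimal sketch based on nothing more than the unverified guess above (the output folder name is ours; codes 16, 17, and 18 are left untouched):

REM remap 20 (non-ground) to 1 and 13 (buildings) to 6 - based on an unverified guess
las2las -i las_*2010-2013.laz ^
        -change_classification_from_to 20 1 ^
        -change_classification_from_to 13 6 ^
        -odir standardized -olaz ^
        -cores 4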

With lasoverlap we can visualize how much overlap the flight lines have and the (potential mis-)alignment between them. We drop the synthetic points with point source IDs 1 and 9999 and add geo-referencing information with ‘-epsg 25832’ so that the resulting images can be displayed as Google Earth overlays. The options ‘-min_diff 0.1’ and ‘-max_diff 0.4’ map elevation differences of up to +/- 10 cm to white. Above +/- 10 cm the color becomes increasingly red/blue with full saturation at +/- 40 cm or higher. This difference can only be computed for pixels with two or more overlapping flight lines.

lasoverlap  -i las_*2010-2013.laz ^
            -drop_point_source 1 ^
            -drop_point_source 9999 ^
            -min_diff 0.1 -max_diff 0.4 ^
            -odir quality -opng ^
            -epsg 25832 ^
            -cores 4

Flight line overlap: blue = 1, aqua = 2, yellow = 3
Flight line alignment: white is good, red/blue is bad.

With lasgrid we check the density distribution of the laser pulses by computing the point density of the last returns for each 2 by 2 meter pixel and then mapping the computed density value to a false color that is blue for a density of 0 and red for a density of 10 or higher.

lasgrid  -i las_*2010-2013.laz ^
         -drop_point_source 1 ^
         -drop_point_source 9999 ^
         -keep_last ^
         -step 2 -point_density ^
         -false -set_min_max 0 10 ^
         -odir quality -odix _d_0_10 -opng ^
         -epsg 25832 ^
         -cores 4

Pulse density variation due to flight line overlap and flight turbulence.

Pulse density variation due to flight line overlap is expected. But the contribution of flight turbulence is also quite significant.

With las2dem we can check the quality of the already existing ground classification in the LiDAR by producing a hillshaded image of a DTM for visual inspection. Based on our initial guess on the classification codes (see above) we keep those synthetic points that improve the DTM (classification codes 16, 17, and 18) in addition to the ground points (classification code 2).

las2dem  -i las_*2010-2013.laz ^
         -keep_class 2 16 17 18 ^
         -step 1 ^
         -hillshade ^
         -odir quality -odix _shaded_dtm -opng ^
         -epsg 25832 ^
         -cores 4

Problems in the ground classification of LiDAR points are often visible in a hillshaded DTM.

Wow. We see a number of ground disturbances in the resulting hillshaded DTM. Some of them are expected because if you read up on the history of the Buchenwald concentration camp you will learn that in 1950 large parts of the camp were demolished. However, the laser finds the remnants of those barracks and buildings as clearly visible ground disturbances under the canopy of the dense forest that has grown there since. And then there are also these bumps that look like bomb craters. Are those from the American bombing raid on August 24, 1944?

Aerial images show mostly forest.
But the foundations of the destroyed buildings of the former concentration camp are visible in the DTM.
They match the building footprints of this visitor map to the memorial site.

We are still not entirely sure what those “bumps” are, but our initial assumption that all of them would have to be bomb craters from that fatal American bombing raid on August 24, 1944 seems to be wrong. Below is a close-up with lasview of the triangulated and shaded ground points from the lower right corner of tile ‘las_656_5654_1_th_2010-2013.laz’.

Close-up in lasview on the bumps in the ground.

We are not sure if all the bumps we can see here are there for the same reason. But we found an old map and managed to overlay it on Google Earth. It suggests that at least the bigger bumps are not bomb craters. On the map they are labelled as “Erdfälle”, which is German for “sink holes”.

The big bumps in the hillshaded DTM are not bomb craters.
They are labelled as "Erdfälle" (sink holes) in this older map.

We got a reminder on the danger of demagogues as well as a glimpse into conflict archaeology and geomorphology with this open LiDAR download and processing exercise. If you want to explore this area any further you can either download the LiDAR and download LAStools and process the data yourself or simply get our KML files here.

Acknowledgement: The LiDAR data of TLVermGeo comes with a very permissive license. It is called “Datenlizenz Deutschland – Namensnennung – Version 2.0” or “dl-de/by-2-0” and allows data and derivative sharing as well as commercial use. It only requires us to name the source. We need to cite “geoportal-th.de (2017)” with the year of the download in brackets and should specify the Uniform Resource Identifier (URI). We have not found this yet and use this URL as a placeholder until we know the correct one. Done. So easy. Thank you, geoportal Thüringen … (-:



Pre-Processing Mobile Rail LiDAR with LAStools

The majority of LAStools users are processing airborne LiDAR. That should not be surprising, as airborne is by far the most common form of LiDAR in terms of square kilometers covered. The availability of LiDAR as “open data” is also pretty much restricted to airborne surveys, which are often tax-payer funded and then distributed freely to achieve maximum return on investment.

But folks are increasingly using our software to do some of the “heavy lifting” for mobile LiDAR, either mounted on a truck for scanning cities or on a train for capturing railroad infrastructure. The LiDAR collected for the cities of Budapest and Singapore, for example, was pre-processed by multi-core scripted LAStools when the scanning trucks returned with their daily trajectories’ worth of point clouds captured by a RIEGL VMX-450 mobile mapping system.

One customer who was recently scanning railroad infrastructure wanted to do automatic ground classification as a first step prior to further segmentation of the data. We were asked for advice because on such data the standard settings of lasground left too many patches of ground unclassified. Also the uniform tiling lastile generates by default is not a good way to break such data into manageable pieces given the drastically varying point densities in mobile scanning.

We obtained a 217 MB file in LAZ format with 40 million points corresponding to a 2.7 km stretch of railway track. We first run a quick lasindex (with the settings suggested for ‘mobile’ data) on the file, which creates a spatial indexing LAX file with a maximal 10 meter resolution. This not only allows faster area-of-interest queries but also gives us, in the GUI of LAStools, a more detailed preview of where the LiDAR points actually are than just the bounding box.

Presence of LAX files results in the actual extent of the LiDAR being shown in the GUI.

lasindex -i segment.laz -tile_size 10 -maximum -100

We then run lastile four times to create an adaptive tiling in which no tile has more than 6 million points. The first call creates the initial 1000 by 1000 meter tiles. The following three calls refine all those tiles that still have more than 6 million points first into 500 by 500 meter, then 250 by 250 meter, and finally 125 by 125 meter tiles in parallel on 4 cores. Note the ‘-refine_tiling’ option is used in the first call to lastile and the ‘-refine_tiles’ option in all subsequent calls.

lastile -i segment.laz ^
        -tile_size 1000 ^
        -buffer 10 -flag_as_withheld ^
        -refine_tiling 6000000 ^
        -odir tiles_raw -o rail.laz
lastile -i tiles_raw\*_1000.laz ^
        -flag_as_withheld ^
        -refine_tiles 6000000 ^
        -olaz ^
        -cores 4
lastile -i tiles_raw\*_500.laz ^
        -flag_as_withheld ^
        -refine_tiles 6000000 ^
        -olaz ^
        -cores 4
lastile -i tiles_raw\*_250.laz ^
        -flag_as_withheld ^
        -refine_tiles 6000000 ^
        -olaz ^
        -cores 4

The resulting tiles all have fewer than 6 million points but still have the initial 10 meter buffer that was specified by the first call to lastile. Two tiles were sufficiently small after the 1st call, three tiles after the 2nd call, eleven tiles after the 3rd call, and three tiles after the 4th.

Points of the adaptive tile (highlighted in blue below) colored by intensity.

Adaptive tiling created with four calls to lastile. A scale factor of 0.00025 (see mouse cursor) implies that point coordinates are stored with quarter-millimeter resolution. Coarsening the scale factors to 0.001 (i.e. millimeter resolution) would result in better compression and lower I/O.
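
If storage and I/O were a concern, the tiles could be re-quantized accordingly. A minimal sketch, assuming the ‘-rescale’ option of las2las (the output folder name is made up):

REM re-quantize coordinates to millimeter resolution for better compression
las2las -i tiles_raw\*.laz ^
        -rescale 0.001 0.001 0.001 ^
        -odir tiles_rescaled -olaz ^
        -cores 4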

Noise in the data – especially low noise – can lead lasground into choosing the wrong points during ground classification by latching on to those low noise points. We first classify the noise points into a different class (7) using lasnoise so we can later ignore them. These particular settings were found by experimenting on a few tiles with different values (see the README file) until visual inspection showed that most low points had been classified as noise.

lasnoise -i tiles_raw\*.laz ^
         -step_xy 0.5 -step_z 0.1 ^
         -odir tiles_denoised -olaz ^
         -cores 4

Noise points shown in violet.

The points classified as noise will not be considered as ground points during the next step. For this it matters little that lamp posts, wires, or vegetation are wrongly marked as noise now. We can always undo their noise classification once the ground points have been classified. What is important is that the points indicated by the mouse cursor, which are below the desired ground, are excluded from consideration during the ground classification step. Here those low points are not actually noise but returns generated wherever the laser was able to “peek” through an opening to a lower surface.

lasground -i tiles_denoised\*.laz ^
          -ignore_class 7 ^
          -step 1 -sub 3 -bulge 0.1 -spike 0.1 -offset 0.02 ^
          -odir tiles_ground -olaz ^
          -cores 4

For classification with lasground there are a number of options to play with (see the README file), but the most important is the correct step size. It is the terrain along the railway track bed that is supposed to be represented well. The usual steps of 5 to 40 meters for lasground aim at the removal of vegetation and man-made structures from airborne LiDAR. They are not the right choice here. A step of 1 and the parameters shown above give us the ground shown below.

Classification of terrain along railway track bed using lasground with ‘-step 1’
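
As mentioned above, the noise classification of objects such as lamp posts, wires, or vegetation could be undone once the ground has been classified. A minimal sketch, assuming the ‘-classify_above’ option of lasheight (the 0.25 meter threshold and the output folder are our assumptions, not part of the original workflow):

REM reset the noise flag of points well above the classified ground back to 1,
REM while low points below the ground surface keep their noise code 7
lasheight -i tiles_ground\*.laz ^
          -classify_above 0.25 1 ^
          -odir tiles_cleaned -olaz ^
          -cores 4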

The new ‘-flag_as_withheld’ option in lastile that flags each point in the buffer with the withheld flag is useful in case we want to remove all buffer points on-the-fly, for example, in order to create a DTM hillshade of 25 cm resolution for a visual quality check of the entire 2.7 km track using blast2dem from the BLAST extension of LAStools.

blast2dem -i tiles_ground\*.laz -merged ^
          -drop_withheld -keep_class 2 ^
          -hillshade -step 0.25 ^
          -o dtm_hillshaded.png

Small 600 x 600 pixel detail of hill-shaded 5663 x 9619 pixel DTM raster generated by blast2dem.



LASmoons: Jesús García Sánchez

Jesús García Sánchez (recipient of three LASmoons)
Landscapes of Early Roman Colonization (LERC) project
Faculty of Archaeology, Leiden University, The Netherlands

Background:
Our project Landscapes of Early Roman Colonization (LERC) has been studying the hinterland of the Latin colony of Aesernia (Molise region, Italy) using several non-destructive techniques, chiefly artefactual survey, geophysics, and interpretation of aerial photographs. Nevertheless, large areas of the territory are covered by the dense forests of the Matese mountains, a ridge belonging to the Apennine chain, or covered by bushes due to the abandonment of the countryside. The project won’t be complete without integrating the marginal, remote and forested areas into our study of the Roman hinterland. Besides, it is also relevant to discuss the feasibility of LiDAR data sets in the study of Mediterranean landscapes and their role within contemporary Landscape Archaeology.

LiDAR coverage in Molise region, Italy.

Goal:
+ to study in detail forested areas in the colonial hinterland of Aesernia.
+ to find the correct parameters of the classification algorithm to be able to locate possible archaeological structures or to appropriately document those we already know.
+ to document and create new visualization of hill-top fortified sites that belong to the indigenous population and are currently poorly studied due to inaccessibility and forest coverage (Monte San Paolo, Civitalla, Castelriporso, etc.)
+ to demonstrate the archaeological potential of LiDAR data in Italy and help other scholars to work with that kind of data, explaining basic information about data quality, where and how to acquire imagery, and examples of application in archaeology. A paper entitled “Working with ALS – LiDAR data in Central South Italy. Tips and experiences” will be presented at the International Mediterranean Survey Workshop at the end of February in Athens.

Civitella hillfort (Longano, IS) and its local context: ridges and forest belonging to the Matese mountains and the Apennines.

Data:
+ Recently the LERC project has acquired a large LiDAR dataset created by the Italian Geoportale Nazionale and the Ministero dell’Ambiente e della Tutela del Territorio e del Mare. The data was originally produced to monitor landslides and erosion risk.
+ The average point resolution is 1 meter.
+ The data sets were originally cropped into 1 sq km tiles by the Geoportale Nazionale for distribution purposes.

LAStools processing:
1) data is provided in *.txt files thus the first step is to create appropriate LAS files to work with [txt2las]
2) combine areas of circa 16 sq km (still fewer than 20 million points to be processed in one piece with LAStools) in the surroundings of the colony of Aesernia and in the Matese mountains [lasmerge]
3) assign the correct projection to the data [lasmerge or las2las]
4) extract the bare-earth with the best-fitting parameters [lasground or lasground_new]
5) create bare-earth terrain rasters as a first step to visualize and analyze the area [las2dem]
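
A minimal sketch of what steps 1 to 5 might look like; the parse string, the EPSG code, the lasground preset, and all file names are illustrative assumptions rather than project specifics:

REM assuming the *.txt files contain x y z columns and a placeholder EPSG code
txt2las -i xyz_tiles\*.txt -parse xyz -odir tiles_laz -olaz
lasmerge -i tiles_laz\*.laz -o aesernia.laz
las2las -i aesernia.laz -epsg 32633 -o aesernia_georef.laz
lasground -i aesernia_georef.laz -fine -o aesernia_ground.laz
las2dem -i aesernia_ground.laz -keep_class 2 -step 1 -hillshade -o aesernia_dtm.png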



LASmoons: Gudrun Norstedt

Gudrun Norstedt (recipient of three LASmoons)
Forest History, Department of Forest Ecology and Management
Swedish University of Agricultural Sciences, Umeå, Sweden

Background:
Until the end of the 17th century, the vast boreal forests of the interior of northern Sweden were exclusively populated by the indigenous Sami. When settlers of Swedish and Finnish ethnicity started to move into the area, colonization was fast. Although there is still a prospering reindeer-herding Sami culture in northern Sweden, the old Sami culture that dominated the boreal forest for centuries or even millennia is to a large extent forgotten.
Since each forest Sami family formerly had a number of seasonal settlements, the density of settlements must have been high. However, only very few remains are known today. In the field, old Sami settlements can be recognized through the presence of, for example, stone hearths, storage caches, pits for roasting pine bark, foundations of certain types of huts, reindeer pens, and fences. Researchers of the Forest History section of the Department of Forest Ecology and Management have long been surveying such remains on foot. This, however, is extremely time consuming and can only be done in limited areas. Also, the use of aerial photographs is usually difficult due to dense vegetation. Data from airborne laser scanning should be the best way to find remains of the old forest Sami culture. Previous research has shown the possibilities of using airborne laser scanning data for detecting cultural remains in the boreal forest (Jansson et al., 2009; Koivisto & Laulamaa, 2012; Risbøl et al., 2013), but no studies have aimed at detecting remains of the forest Sami culture. I want to test the possibilities of ALS in this respect.

DTM from the Krycklan catchment, showing a row of hunting pits and (larger) a tar pit.

Goal:
The goal of my study is to test the potential of using LiDAR data for detecting cultural and archaeological remains on the ground in a forest area where Sami are known to have dwelt during historical times. Since the whole of Sweden is currently being scanned by the National Land Survey, this data will be included. However, the average point density of the national data is only 0.5–1 pulses/m^2. Therefore, the study will be done in an established research area, the Krycklan catchment, where a denser scanning was performed in 2015. The Krycklan data set lacks ground point classification, so I will have to perform such a classification before I can proceed to the creation of a DTM. Having tested various kinds of software, I have found that LAStools seems to be the most efficient way to do the job. This, in turn, has made me aware of the importance of choosing the right methods and parameters for doing a classification that is suitable for archaeological purposes.

Data:
+ The data was acquired with a multi-spectral airborne LiDAR sensor, the Optech Titan, and a Micro IRS IMU, operated on an aircraft flying at a height of about 1000 m; positioning was post-processed with the TerraPos software for higher accuracy.
+ The average pulse density is 20 pulses/m^2.
+ About 7 000 hectares were covered by the scanning. The data is stored in 489 tiles.

LAStools processing:
1) run a series of classifications of a few selected tiles with both lasground and lasground_new with various parameters [lasground and lasground_new]
2) test the outcomes by comparing them to known terrain to find out the optimal parameters for classifying this particular LiDAR point cloud for archaeological purposes
3) extract the bare-earth of all tiles (using buffers!!!) with the best parameters [lasground or lasground_new]
4) create bare-earth terrain rasters (DTMs) and analyze the area [las2dem]
5) reclassify the airborne LiDAR data collected by the National Land Survey using various parameters to see whether it can become more suitable for revealing Sami cultural remains in a boreal forest landscape [lasground or lasground_new]
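
A minimal sketch of what steps 3 and 4 might look like once suitable parameters have been found; the step size, the ‘-fine’ preset, and the folder names are illustrative assumptions, not the parameters the study will settle on:

REM a small step preserves subtle archaeological relief; the tiles need buffers
lasground_new -i tiles\*.laz -step 2 -fine -odir tiles_ground -olaz -cores 4
las2dem -i tiles_ground\*.laz -keep_class 2 -step 0.5 -hillshade -odir dtm_hillshade -opng -cores 4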

References:
Jansson, J., Alexander, B. & Söderman, U. 2009. Laserskanning från flyg och fornlämningar i skog. Länsstyrelsen Dalarna (PDF).
Koivisto, S. & Laulamaa, V. 2012. Pistepilvessä – Metsien arkeologiset kohteet LiDAR-ilmalaserkeilausaineistoissa. Arkeologipäivät 2012 (PDF).
Risbøl, O., Bollandsås, O.M., Nesbakken, A., Ørka, H.O., Næsset, E., Gobakken, T. 2013. Interpreting cultural remains in airborne laser scanning generated digital terrain models: effects of size and shape on detection success rates. Journal of Archaeological Science 40:4688–4700.



LASmoons: Marzena Wicht

Marzena Wicht (recipient of three LASmoons)
Department of Photogrammetry, Remote Sensing and GIS
Warsaw University of Technology, Poland.

Background:
More than half of the human population (Heilig 2012) suffers from the many negative effects of living in cities: increased air pollution, limited access to green areas, the Urban Heat Island (UHI) effect, and many more. To mitigate some of these effects, many ideas have come up over the years: reducing the surface albedo, the idea of the Garden City, green belts, and so on. Increasing horizontal wind speed might actually improve both air pollution dispersion and thermal comfort in urban areas (Gál & Unger 2009). Areas of low roughness promote air flow – discharging warm, polluted air from the city and supplying it with cool, fresh air – if they share specific parameters, are connected, and penetrate the inner city with a country breeze. That is why mapping low-roughness urban areas is important for better understanding urban climate.

Goal:
The goal of this study is to derive buildings (outlines and height) and high vegetation using LAStools and to use that data for mapping urban ventilation corridors for our case study area in Warsaw. There are many ways to map these; however, using ALS data has certain advantages (Suder & Szymanowski 2014) in this case: DSMs can be easily derived, tree canopy (incl. height) can be included in the analysis, and buildings can be easily extracted. The outputs are then used as a basis for morphological analysis, such as calculating the frontal area index. LAStools has the considerable advantage of processing large quantities of data (~500 GB) efficiently.

Frontal area index calculation based on 3D building database

Data:
+ LiDAR provided by Central Documentation Center of Geodesy and Cartography
+ average pulse density 12 p/m^2
+ covers 517 km^2 (whole Warsaw)

LAStools processing:
1) quality checking of the data as described in several videos and blog posts [lasinfo, lasvalidate, lasoverlap, lasgrid, lasduplicate, lasreturn, las2dem]
2) reorganize data into sufficiently small tiles with buffers to avoid edge artifacts [lastile]
3) classify point clouds into vegetation and buildings [lasground, lasclassify]
4) normalize LiDAR heights [lasheight]
5) create triangulated, rasterized derivatives: DSM / DTM / nDSM / CHM [las2dem, blast2dem]
6) compute height-based metrics (e.g. ‘-avg’, ‘-std’, and ‘-p 50’) [lascanopy]
7) generate subsets during the workflow [lasclip]
8) generate building footprints [lasboundary]
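
A minimal sketch of what steps 3 and 8 might look like (lasclassify expects the heights computed by lasheight; the concavity value and the folder names are assumptions):

REM compute heights above ground, classify buildings (6) and high vegetation (5),
REM then trace disjoint building footprints into a shapefile
lasheight -i tiles_ground\*.laz -odir tiles_height -olaz -cores 4
lasclassify -i tiles_height\*.laz -odir tiles_classified -olaz -cores 4
lasboundary -i tiles_classified\*.laz -keep_class 6 -disjoint -concavity 1.5 -odir footprints -oshp -cores 4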

References:
Heilig, G. K. (2012). World urbanization prospects: the 2011 revision. United Nations, Department of Economic and Social Affairs (DESA), Population Division, Population Estimates and Projections Section, New York.
Gal, T., & Unger, J. (2009). Detection of ventilation paths using high-resolution roughness parameter mapping in a large urban area. Building and Environment, 44(1), 198-206.
Suder, A., & Szymanowski, M. (2014). Determination of ventilation channels in urban area: A case study of Wroclaw (Poland). Pure and Applied Geophysics, 171(6), 965-975.



Integrating External Ground Points in Forests to Improve DTM from Dense-Matching Photogrammetry

The biggest problem of generating a Digital Terrain Model (DTM) from the photogrammetric point clouds that are produced from aerial imagery with dense-matching software such as SURE, Pix4D, or Photoscan is dense vegetation: where plants completely cover the terrain, not a single point is generated on the ground. This is different for LiDAR point clouds as the laser can even penetrate dense multi-level tropical forests. The complete lack of ground points in larger vegetated areas such as closed forests or dense plantations means that the many processing workflows for vegetation analysis that have been developed for LiDAR cannot be used for photogrammetric point clouds … unless … well, unless we get those missing ground points some other way. In the following we see how to integrate external ground points to generate a reasonable DTM under a dense forest with LAStools. See this, this, this, this, and this article for further reading.


Here you can download the dense matching point cloud, the manually collected ground points, and the forest stand delineating polygon that we are using in the following example work flow:

We leave the usual inspection of the content with lasinfo and lasview that we always recommend on newly obtained data as an exercise to the reader. Using las2dem and lasgrid we created the Google Earth overlays shown above to visualize the extent of the dense matched point cloud and the distribution of the manually collected ground points:

las2dem -i DenseMatching.laz ^
        -thin_with_grid 1.0 ^
        -extra_pass ^
        -step 2.0 ^
        -hillshade ^
        -odix _hill_2m -opng

lasgrid -i ManualGround.laz ^
        -set_RGB 255 0 0 ^
        -step 10 -rgb ^
        -odix _grid_10m -opng

Attempts to ground-classify the dense matching point cloud directly are futile as there are no ground points under the canopy in the heavily forested area. Therefore 558 ground points were manually surveyed in the forest of interest that are around 50 to 120 meters apart from one another. We show how to integrate these points into the dense matching point cloud such that we can successfully extract bare-earth information from the data.

In the first step we “densify” the manually collected ground points by interpolating them with triangles onto a raster of 2 meter resolution that we store as LAZ points with las2dem. You could consider other interpolation schemes to “densify” the ground points, here we use simple linear interpolation to prove the concept. Due to the varying distance between the manually surveyed ground points we allow interpolating triangles with edge lengths of up to 125 meters. These triangles then also cover narrow open areas next to the forest, so we clip the interpolated ground points against the forest stand delineating polygon with lasclip to classify those points that are really in the forest as “key points” (class 8) and all others as “noise” (class 7).

las2dem -i ManualGround.laz ^
        -step 2 ^
        -kill 125 ^
        -odix _2m -olaz

lasclip -i ManualGround_2m.laz ^
        -set_classification 7 ^
        -poly forest.shp ^
        -classify_as 8 -interior ^
        -odix _forest -olaz

Below we show the resulting densified ground points colored by elevation that survive the clipping against the forest stand delineating polygon and were classified as “key points” (class 8). The interpolated ground points in narrow open areas next to the forest that fall outside this polygon were classified as “noise” (class 7) and are shown in violet. They will be dropped in the next step.


We then merge the dense matching points with the densified manual ground points (while dropping all the violet points marked as noise) as input to lasthin and reclassify the lowest point per 1 meter by 1 meter cell with a temporary code (here we use class 9 that usually refers to “water”). Only the subset of lowest points that receives the temporary classification code 9 will be used for ground classification later.

lasthin -i DenseMatching.laz ^
        -i ManualGround_2m_forest.laz ^
        -drop_class 7 ^
        -merged ^
        -lowest -step 1 -classify_as 9 ^
        -o DenseMatchingAndDensifiedGround.laz

We use the GUI of lasview to pick several interesting areas for visual inspection. The selected points load much faster when the LAZ file is spatially indexed and therefore we first run lasindex. For better orientation we also load the forest stand delineating polygon as an overlay into the GUI.

lasindex -i DenseMatchingAndDensifiedGround.laz

lasview -i DenseMatchingAndDensifiedGround.laz -gui

We pick the area shown below that contains the target forest with manually collected and densified ground points and a forested area with only dense matching points. The difference could not be more drastic as the visualizations show.


Now we run ground classification using lasground with option ‘-town’, using only the points with the temporary code 9 by ignoring all other classifications 0 and 8 in the file. We leave the temporary classification code 9 unchanged for all the points that were not classified with “ground” code 2 so we can later visualize which ones those are.

lasground -i DenseMatchingAndDensifiedGround.laz ^
          -ignore_class 0 8 ^
          -town ^
          -non_ground_unchanged ^
          -o GroundClassified.laz

We again use the GUI of lasview to pick several interesting areas after running lasindex and again load the forest stand delineating polygon as an overlay into the GUI.

lasindex -i GroundClassified.laz

lasview -i GroundClassified.laz -gui

We pick the area shown below that contains all three scenarios: the target forest with manually collected and densified ground points, an open area with only dense matching points, and a forested area with only dense matching points. The result is as expected: in the target forest the manually collected ground points are used as ground and in the open area the dense-matching points are used as ground. But there is no useful ground in the other forested area.


Now we can compute the heights of the points above ground for our target forest with lasheight and either replace the z elevations in the file or store them separately as “extra bytes”. Then we can compute, for example, a Canopy Height Model (CHM) that color codes the height of the vegetation above the ground with lasgrid. Of course this will only be correct in the target forest where we have “good” ground but not in the other forested areas. We also compute a hillshaded DTM to be able to visually inspect the topography of the generated terrain model.

lasheight -i GroundClassified.laz ^
          -store_as_extra_bytes ^
          -o GroundClassifiedWithHeights.laz

lasgrid -i GroundClassifiedWithHeights.laz ^
        -step 2 ^
        -highest -attribute 0 ^
        -false -set_min_max 0 25 ^
        -o chm.png

las2dem -i GroundClassified.laz ^
        -keep_class 2 -extra_pass ^
        -step 2 ^
        -hillshade ^
        -o dtm.png

Here you can download the resulting color-coded CHM and the resulting hill-shaded DTM as Google Earth KMZ overlays. Clearly the resulting CHM is only meaningful in the target forest where we used the manually collected ground points to create a reasonable DTM. In the other forested areas the ground is only correct near the forest edges and gets worse with increasing distance from open areas. The resulting DTM exhibits some interesting looking bumps in the middle of areas with manually collected ground points. Those are a result of using the dense-matching points as ground whenever their elevation is lower than that of the manually collected points (which is decided in the lasthin step). Whether those bumps represent true elevations or are artifacts of erroneously low elevations from dense matching remains to be investigated.


For forests on complex and steep terrain the number of ground points that needs to be manually collected may make such an approach infeasible in practice. However, maybe you have another source of elevation, such as a low-resolution DTM of 10 or 25 meter provided by your local government. Or maybe even a high resolution DTM of 1 or 2 meter from a LiDAR survey you did several years ago. While the forest may have grown a lot in the past years, the ground under the forest will probably not have changed much …



Removing Excessive Low Noise from Dense-Matching Point Clouds

Point clouds produced with dense matching by photogrammetry software such as SURE, Pix4D, or Photoscan can include a fair amount of the kind of “low noise” seen below. Low noise causes trouble when attempting to construct a Digital Terrain Model (DTM) from the points, as common algorithms for classifying points into ground and non-ground points – such as lasground – tend to “latch onto” those low points, thereby producing a poor representation of the terrain. This blog post describes one possible LAStools workflow for eliminating excessive low noise. It was developed after a question in the LAStools user forum by LASmoons holder Muriel Lavy, who was able to share her noisy data with us. See this, this, this, this, this, and this blog post for further reading on this topic.


Here you can download the dense matching point cloud that we are using in the following work flow:

We leave the usual inspection of the content with lasinfo, lasview, and lasvalidate that we always recommend on newly obtained data as an exercise to the reader. Note that a check for proper alignment of flightlines with lasoverlap that we consider mandatory for LiDAR data is not applicable to dense-matching points.

With lastile we turn the original file with 87,261,083 points into many smaller 500 by 500 meter tiles for efficient multi-core processing. Each tile is given a 25 meter buffer to avoid edge artifacts. The buffer points are marked as withheld for easier on-the-fly removal. We add a (terser) description of the WGS84 UTM zone 32N to each tile via the corresponding EPSG code 32632:

lastile -i muriel\20161127_Pancalieri_UTM.laz ^
        -tile_size 500 -buffer 25 -flag_as_withheld ^
        -epsg 32632 ^
        -odir muriel\tiles_raw -o panca.laz

Because dense-matching points often have a poor point order in the files they get delivered in, we use lassort to rearrange them into a space-filling curve order as this will speed up most following processing steps:

lassort -i muriel\tiles_raw\panca*.laz ^
        -odir muriel\tiles_sorted -olaz ^
        -cores 7

We then run lasthin to reclassify the highest point of every 2.5 by 2.5 meter grid cell with classification code 8. As the spacing of the dense-matched points is around 40 cm in both x and y, around 40 points will fall into each such grid cell from which the highest is then classified as 8:

lasthin -i muriel\tiles_sorted\panca*.laz ^
        -step 2.5 ^
        -highest -classify_as 8 ^
        -odir muriel\tiles_thinned -olaz ^
        -cores 7

Considering only those points classified as 8 in the last step, we then run lasnoise to find points that are highly isolated in wide and flat neighborhoods and reclassify them as 7. See the README file of lasnoise for a detailed explanation of the different parameters:

lasnoise -i muriel\tiles_thinned\panca*.laz ^
         -ignore_class 0 ^
         -step_xy 5 -step_z 0.1 -isolated 4 ^
         -classify_as 7 ^
         -odir muriel\tiles_isolated -olaz ^
         -cores 7

Now we run a temporary ground classification only (!!!) on those points that are still classified as 8 using the default parameters of lasground. Hence we only use the points that were the highest points on the 2.5 by 2.5 meter grid and that were not classified as noise in the previous step. See the README file of lasground for a detailed explanation of the different parameters:

lasground -i muriel\tiles_isolated\panca*.laz ^
          -city -ultra_fine -ignore_class 0 7 ^
          -odir muriel\tiles_temp_ground -olaz ^
          -cores 7

The result of this temporary ground filtering is then merely used to mark all points that are more than 0.5 meter below the triangulated TIN of these temporary ground points with classification code 12 using lasheight. See the README file of lasheight for a detailed explanation of the different parameters:

lasheight -i muriel\tiles_temp_ground\panca*.laz ^
          -do_not_store_in_user_data ^
          -classify_below -0.5 12 ^
          -odir muriel\tiles_temp_denoised -olaz ^
          -cores 7

In the resulting tiles the low noise points (but also many points above the ground) are now marked, and in a final step we produce properly classified denoised tiles by re-mapping the temporary classification codes to conventions that are more consistent with the ASPRS LAS specification using las2las:

las2las -i muriel\tiles_temp_denoised\panca*.laz ^
        -change_classification_from_to 1 0 ^
        -change_classification_from_to 2 0 ^
        -change_classification_from_to 7 0 ^
        -change_classification_from_to 12 7 ^
        -odir muriel\tiles_denoised -olaz ^
        -cores 7

Let us visually check what each of the above steps has produced by zooming in on a 300 meter by 100 meter strip of points with the bounding box (388500,4963125) to (388800,4963225) in tile ‘panca_388500_4963000.laz’:
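
A minimal sketch of how such an area of interest can be pulled up, assuming the standard ‘-inside’ filter of LAStools (the exact invocation is our illustration, not part of the original workflow):

REM load only the 300 by 100 meter strip of the tile into the viewer
lasview -i muriel\tiles_denoised\panca_388500_4963000.laz ^
        -inside 388500 4963125 388800 4963225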

The final classification of all points that are not already classified as noise (7) into ground (2) or non-ground (1) is done with a final run of lasground. See the README file of lasground for a detailed explanation of the different parameters:

lasground -i muriel\tiles_denoised\panca*.laz ^
          -ignore_class 7 ^
          -city -ultra_fine ^
          -odir muriel\tiles_ground -olaz ^
          -cores 7

Then we create seamless hill-shaded DTM tiles by triangulating all the points classified as ground into a temporary TIN (including those in the 25 meter buffer) and then rasterizing only the inner 500 meter by 500 meter of each tile with option ‘-use_tile_bb’ of las2dem. For more details on the importance of buffers in tile-based processing see this blog post here.

las2dem -i muriel\tiles_ground\panca*.laz ^
        -keep_class 2 ^
        -step 1 -hillshade ^
        -use_tile_bb ^
        -odir muriel\tiles_dtm -opng ^
        -cores 7

And here is the original DSM side-by-side with the resulting DTM after low noise removal. One densely forested area near the center of the data was not entirely removed due to the lack of ground points in this area. Integrating external ground points or manual editing with lasview are two possible ways to rectify these few remaining errors …


LASmoons: Huaibo Mu

Huaibo Mu (recipient of three LASmoons)
Environmental Mapping, Department of Geography
University College London (UCL), UK

Background:
This study is a part of the EU-funded Metrology for Earth Observation and Climate project (MetEOC-2). It aims to combine terrestrial and airborne LiDAR data to estimate biomass and allometry for woodland trees in the UK. Airborne LiDAR can capture large amounts of data over large areas, while terrestrial LiDAR can provide much more detail of high quality in specific areas. The biomass and allometry for individual tree species in 1 ha of Wytham Woods, located about 5 km north west of the University of Oxford, UK, are estimated by combining both airborne and terrestrial LiDAR. Then the bias will be evaluated when the estimation is applied at different levels: terrestrial LiDAR level, tree level, and area level. The goal is better insight and a controllable error range in the bias of biomass and allometry estimates for woodland trees based on airborne LiDAR.

Goal:
The study aims to find suitable parameters for the allometric equations of different species and to establish the relationship between tree height, stem diameter, and crown diameter in order to estimate above-ground biomass using airborne LiDAR. The biomass estimates at the different levels are then compared to evaluate the bias, and the total 6 ha of Wytham Woods are used for calibration and validation. Finally the results are to be applied to other woodlands in the UK. The LiDAR processing tasks for which LAStools are used mainly center around the creation of a suitable Canopy Height Model (CHM) from the airborne LiDAR.

Data:
+ Airborne LiDAR data produced by Professor David Coomes (University of Cambridge) with Airborne Research and Survey Facility (ARSF) Project code of RG13_08 in June 2014. The average point density is about 5.886 per m^2.
+ Terrestrial LiDAR data collected by UCL’s team, led by Dr. Mat Disney and Dr. Kim Calders, in order to develop very detailed 3D models of the trees.
+ Fieldwork from the project “Initial Results from Establishment of a Long-term Broadleaf Monitoring Plot at Wytham Woods, Oxford, UK” by Butt et al. (2009).

LAStools processing:
1) check LiDAR quality as described in these videos and articles [lasinfo, lasvalidate, lasoverlap, lasgrid, las2dem]
2) classify into ground and non-ground points using tile-based processing  [lastile, lasground]
3) generate a Digital Terrain Model (DTM) [las2dem]
4) compute height of points and delete points higher than maximum tree height obtained from terrestrial LiDAR [lasheight]
5) convert points into disks with 10 cm diameter to conservatively account for laser beam width [lasthin]
6) generate spike-free Digital Surface Model (DSM) based on algorithm by Khosravipour et al. (2016) [las2dem]
7) create Canopy Height Model (CHM) by subtracting DTM from spike-free DSM [lasheight].
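
A minimal sketch of steps 4 to 6, assuming the ‘-spike_free’ option of las2dem implements the algorithm of Khosravipour et al. (2016); the 40 m maximum tree height, disk radius, step sizes, freeze value, and folder names are illustrative assumptions. Normalizing the heights first (‘-replace_z’) makes the spike-free DSM directly usable as a CHM:

REM normalize heights, drop returns above an assumed maximum tree height,
REM turn each return into a 10 cm disk, then rasterize a spike-free surface
lasheight -i tiles_ground\*.laz -replace_z -drop_above 40 -odir tiles_norm -olaz -cores 4
lasthin -i tiles_norm\*.laz -subcircle 0.05 -step 0.5 -highest -odir tiles_disks -olaz -cores 4
las2dem -i tiles_disks\*.laz -step 0.5 -spike_free 1.0 -odir chm -obil -cores 4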

References:
Butt, N., Campbell, G., Malhi, Y., Morecroft, M., Fenn, K., & Thomas, M. (2009). Initial results from establishment of a long-term broadleaf monitoring plot at Wytham Woods, Oxford, UK. University Oxford, Oxford, UK, Rep.
Khosravipour, A., Skidmore, A.K., Isenburg, M., Wang, T.J., Hussin, Y.A., (2014). Generating pit-free Canopy Height Models from Airborne LiDAR. PE&RS = Photogrammetric Engineering and Remote Sensing 80, 863-872.
Khosravipour, A., Skidmore, A.K., Isenburg, M. and Wang, T.J. (2015) Development of an algorithm to generate pit-free Digital Surface Models from LiDAR, Proceedings of SilviLaser 2015, pp. 247-249, September 2015.
Khosravipour, A., Skidmore, A.K., Isenburg, M (2016) Generating spike-free Digital Surface Models using raw LiDAR point clouds: a new approach for forestry applications, (journal manuscript under review).



Scotland’s LiDAR goes Open Data (too)

Following the lead of England and Wales, the Scottish LiDAR is now also open data. The implementation of such an open geospatial policy in the United Kingdom was spear-headed by the Environment Agency of England, which started to make all of its LiDAR holdings available as open data. In September 2015 they opened DTM and DSM raster derivatives down to 25 cm resolution and in March 2016 the raw point clouds also went online in our compressed and open LAZ format (more info here) – all with the very permissive Open Government Licence v3. This treasure trove of geospatial data was collected by Environment Agency Geomatics’ own survey aircraft, mainly for flood mapping purposes. The data had been access-restricted for the past 17 years of operation and was made open only after it was shown that restricting access in order to recover costs to finance future operations – a common argument for withholding tax-payer funded data – was nothing but an utter myth. This open data policy has resulted in an incredible re-use of the LiDAR and the Environment Agency has literally been propelled into the role of a “champion for open data”, inspiring Wales (and possibly the German states of North Rhine-Westphalia and Thuringia) and now also Scotland to open up their geospatial archives as well …

Huge LAS files available for download from the Scottish Open Data portal.

We went to the nice online portal of Scotland to download three files from the Phase II LiDAR for Scotland that are provided as uncompressed LAS files, namely LAS_NN45NE.las, LAS_NN55NE.las, and LAS_NN55NW.las, whose sizes are listed as 1.2 GB, 2.8 GB, and 4.7 GB in the screenshot above. Needless to say, it took quite some time and several restarts (using wget with option ‘-c’) to successfully download these very large LAS files.
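
A minimal sketch of such a resumable download; the URL is a made-up placeholder, not the actual download link of the Scottish portal:

REM '-c' continues a partially completed download after an interruption
wget -c https://example.org/scottish-lidar/phase-2/LAS_NN55NW.las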

laszip -i LAS_NN45NE.las -odix _cm -olaz -rescale 0.01 0.01 0.01 
laszip -i LAS_NN45NE.las -odix _mm -olaz
laszip -i LAS_NN55NE.las -odix _cm -olaz -rescale 0.01 0.01 0.01 
laszip -i LAS_NN55NE.las -odix _mm -olaz
laszip -i LAS_NN55NW.las -odix _cm -olaz -rescale 0.01 0.01 0.01 
laszip -i LAS_NN55NW.las -odix _mm -olaz

After downloading we decided to see how well these files compress with LASzip by running the six commands shown above, creating LAZ files with the coordinate resolution re-scaled to centimeters (cm) as well as LAZ files with the original millimeter (mm) coordinate resolution (i.e. the original scale factors are 0.001, which is somewhat excessive for aerial LiDAR where the error per coordinate is typically between 5 cm and 20 cm). Below you see the resulting file sizes for the three different files.

 1,164,141,247 LAS_NN45NE.las
   124,351,690 LAS_NN45NE_cm.laz (1 : 9.4)
   146,651,719 LAS_NN45NE_mm.laz (1 : 7.9)
 2,833,123,863 LAS_NN55NE.las
   396,521,115 LAS_NN55NE_cm.laz (1 : 7.1)
   474,767,495 LAS_NN55NE_mm.laz (1 : 6.0)
 4,664,782,671 LAS_NN55NW.las
   531,454,473 LAS_NN55NW_cm.laz (1 : 8.8)
   629,141,151 LAS_NN55NW_mm.laz (1 : 7.4)

The savings in download time and storage space of storing the LiDAR in LAZ versus LAS are sixfold to tenfold. If I were a taxpayer in Scotland and my government were hosting open data in the Amazon cloud (i.e. paying for AWS cloud services with my taxes), I would encourage them to store their data in a more compressed format. Some more details on the data follow.

According to the provided meta data, the Scottish Public Sector LiDAR Phase II dataset was commissioned by the Scottish Government in response to the Flood Risk Management Act (2009). The project was managed by Sniffer and the contract was awarded to Fugro BKS. Airborne LiDAR data was collected for 66 sites (the dataset does not have full national coverage) totaling 3,516 km^2 between 29th November 2012 and 18th April 2014. The point density was a minimum of 1 point/sqm, and approximately 2 points/sqm on average. A DTM and DSM were produced from the point clouds, with 1 m spatial resolution. The coordinate reference system is OSGB 1936 / British National Grid (EPSG code 27700). The data is licensed under an Open Government Licence. However, under the use constraints section it not only states that the following attribution statement must be used to acknowledge the source of the information, “Copyright Scottish Government and SEPA (2014)”, but also that Fugro retains the commercial copyright, which is somewhat disconcerting and may require more clarification. According to this tweet a lesser license (NCGL) applies to the raw LiDAR point clouds. Below is a lasinfo report for the large LAS_NN55NW.las as well as several visualizations with lasview.

lasinfo (170915) report for LAS_NN55NW.las
reporting all LAS header entries:
 file signature: 'LASF'
 file source ID: 0
 global_encoding: 1
 project ID GUID data 1-4: 00000000-0000-0000-0000-000000000000
 version major.minor: 1.2
 system identifier: 'Riegl LMS-Q'
 generating software: 'Fugro LAS Processor'
 file creation day/year: 343/2016
 header size: 227
 offset to point data: 227
 number var. length records: 0
 point data format: 1
 point data record length: 28
 number of point records: 166599373
 number of points by return: 149685204 14102522 2531075 280572 0
 scale factor x y z: 0.001 0.001 0.001
 offset x y z: 250050 755050 270
 min x y z: 250000.000 755000.000 203.731
 max x y z: 254999.999 759999.999 491.901
reporting minimum and maximum for all LAS point record entries ...
 X -50000 4949999
 Y -50000 4949999
 Z -66269 221901
 intensity 39 2046
 return_number 1 4
 number_of_returns 1 4
 edge_of_flight_line 0 1
 scan_direction_flag 1 1
 classification 1 11
 scan_angle_rank -30 30
 user_data 0 3
 point_source_ID 66 91
 gps_time 38230669.389034 38402435.753789
number of first returns: 149685204
number of intermediate returns: 2813604
number of last returns: 149687616
number of single returns: 135599244
overview over number of returns of given pulse: 135599244 23122229 6754118 1123782 0 0 0
histogram of classification of points:
 287819 unclassified (1)
 109019874 ground (2)
 14476880 low vegetation (3)
 3487218 medium vegetation (4)
 39141518 high vegetation (5)
 165340 building (6)
 13508 rail (10)
 7216 road surface (11)

Kudos to the Scottish government for opening their data. We hereby acknowledge the source of the LiDAR that we have used in the experiments above as “Copyright Scottish Government and SEPA (2014)”.

LASmoons: Chris J. Chandler

Chris J. Chandler (recipient of three LASmoons)
School of Geography
University of Nottingham, UNITED KINGDOM

Background:
Wetlands provide a range of important ecosystem services: they store carbon, regulate greenhouse gas emissions, and provide flood protection as well as water storage and purification. Preserving these services is critical to achieve sustainable environmental management. Currently, mangrove forests are protected in Mexico; however, freshwater wetland forests, which also have a high capacity for storing carbon both in the trees and in the soil, are not protected under present legislation. As a result, coastal wetlands in Mexico are threatened by conversion to grazing areas, drainage for urban development, and pollution. Given these threats, there is an urgent need to understand the current state and distribution of wetlands to inform policy and protect the ecosystem services provided by these wetlands.
In this project we will combine field data collection, satellite data (i.e. optical remote sensing, radar and LiDAR remote sensing) and modelling to provide an integrated technology for assessing the value of a range of ecosystem services, tested to proof of concept stage based on carbon storage. The outcome of the project will be a tool for mapping the value of a range of ecosystem services. These maps will be made directly available to local stakeholders including policy makers and land users to inform policy regarding forest protection/legislation and aid development of financial incentives for local communities to protect these services.

Wetland classification in the Chiapas region of Mexico

Goal:
At this stage of the project we have characterized wetlands for three priority areas in Mexico (Pantanos de Centla, La Encrucijada and La Mancha). The next stage is the up-scaling of the field data at the three study sites using LiDAR data to produce a high-quality Canopy Height Model (CHM), which is of great importance for biomass estimation (Ferraz et al., 2016). A high-quality CHM will be achieved using the LAStools software.

Data:
+ LiDAR provided by the Mexican National Institute of Statistics and Geography (INEGI)
+ average height: 5500 m, mirror angle: +/- 30 degrees, speed: 190 knots
+ collected with Cessna 441, Conquest II system at 1 pts/m².

LAStools processing:
1) create 1000 meter tiles with 35 meter buffer to avoid edge artifacts [lastile]
2) classify point clouds into ground and non-ground [lasground]
3) normalize height of points above the ground [lasheight]
4) create a Digital Terrain and Surface Model (DTM and DSM) [las2dem]
5) generate a spike-free Canopy Height Model (CHM) as described here and here [las2dem]
6) compute various metrics for each plot and the normalized tiles [lascanopy] (see the command sketch below)
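
A minimal command-line sketch of steps 1, 2, 3, and 6 is given below. It is only an illustration: the input folder name, the ‘-wilderness’ setting for lasground, the 1.37 m height cutoff, and the 20 m raster step for lascanopy are assumptions that would need to be adapted to the actual INEGI deliverables and field plots.

lastile -i inegi_raw\*.laz ^
        -tile_size 1000 -buffer 35 -flag_as_withheld ^
        -odir tiles_raw -o wetland.laz

lasground -i tiles_raw\*.laz ^
          -wilderness ^
          -odir tiles_ground -olaz ^
          -cores 4

lasheight -i tiles_ground\*.laz ^
          -replace_z ^
          -odir tiles_normalized -olaz ^
          -cores 4

lascanopy -i tiles_normalized\*.laz ^
          -height_cutoff 1.37 ^
          -p 50 95 -avg -std -cov -dns ^
          -step 20 ^
          -odir tiles_metrics -obil ^
          -cores 4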

References:
Ferraz, A., Saatchi, S., Mallet, C., Jacquemoud S., Gonçalves G., Silva C.A., Soares P., Tomé, M. and Pereira, L. (2016). Airborne Lidar Estimation of Aboveground Forest Biomass in the Absence of Field Inventory. Remote Sensing, 8(8), 653.

LASmoons: Manuel Jurado

Manuel Jurado (recipient of three LASmoons)
Departamento de Ingeniería Topográfica y Cartografía
Universidad Politécnica de Madrid, SPAIN

Background:
The availability of LiDAR data is creating a lot of innovative possibilities in different fields of science, education, and other fields of interest. One of the areas that has been deeply impacted by LiDAR is cartography, and in particular one highly specialized sub-field of cartography in the domain of recreational and professional orienteering running: the production of high-quality maps for orienteering races (Ditz et al., 2014). These are thematic maps with a lot of fine detail whose production demands many hours of field work from the map maker. In order to reduce the field work, LiDAR data obtained from Airborne Research Australia (ARA) will be used to derive DEMs and to extract features that must be included in these maps. The data will be filtered and processed with the help of LAStools.


Final map with the symbology typically used for orienteering running.

Goal:
The goal of this project is to extract point (boulders, mounds), linear (contours, erosion gullies, cliffs), and area features (vegetation density) that should be drawn in an orienteering map derived from high-resolution LiDAR. Different LiDAR-derived raster images are being created: a 0.5 m DTM, vegetation density (J. Ryyppo, 2013), slope, Sky-View factor (Ž. Kokalj et al., 2011), and shaded relief. The area is located in Renmark, South Australia, and the resulting map will be used for the Australian Orienteering Championships 2018.


Sky-View factor of DTM for same area as shown above.

Data:
+ 4 square kilometers of airborne LiDAR data produced by Airborne Research Australia at 18 pulses per square meter using the full waveform scanning LiDAR Q680i-S laser scanner from RIEGL
+ 60 hours of check and validation work in the field

LAStools processing:
1) tile into 500 by 500 meter tiles with 20 meter buffer [lastile]
2) classify isolated points as noise [lasnoise]
3) classify point clouds into ground and non-ground [lasground]
4) create a Digital Terrain Model (DTM) [las2dem]
5) normalize height of points above the ground [lasheight]
6) compute vegetation density metrics [lascanopy]
7) create hillshades of the raster DTMs [blast2dem or GDAL] (see the command sketch below)
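
As a hedged illustration of steps 4 to 7, and assuming ground-classified tiles already exist in a hypothetical ‘tiles_ground’ folder and height-normalized tiles in ‘tiles_norm’, the DTM, the vegetation density raster, and a hillshade could be produced roughly like this (the Sky-View factor and the cartographic symbolization happen outside of LAStools):

las2dem -i tiles_ground\*.laz ^
        -keep_class 2 ^
        -step 0.5 -use_tile_bb ^
        -odir tiles_dtm -obil ^
        -cores 4

lascanopy -i tiles_norm\*.laz ^
          -height_cutoff 0.5 ^
          -step 2 -cov -dns ^
          -odir tiles_veg -obil ^
          -cores 4

blast2dem -i tiles_dtm\*.bil -merged ^
          -hillshade ^
          -o dtm_hillshaded.png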

References:
Ditz, Robert, Franz Glaner, and Georg Gartner. (2014). “Laser Scanning and Orienteering Maps.” Scientific Journal of Orienteering 19.1.
Ryyppo, Jarkko. (2013). “Karttapullautin vegetation mapping guide”.
Kokalj, Žiga, Zaksek, Klemen, and Oštir, Krištof. (2011). Application of sky-view factor for the visualization of historic landscape features in lidar-derived relief models. Antiquity. 85. 263-273.

LASmoons: Sebastian Kasanmascheff

Sebastian Kasanmascheff (recipient of three LASmoons)
Forest Inventory and Remote Sensing
Georg-August-Universität Göttingen, GERMANY

Background:
Forest inventories are the backbone of forest management in Germany. In most federal forestry administrations in Germany, they are performed every ten years in order to ensure that logging activities are sustainable. The process involves trained foresters who visit each stand (i.e. an area where the forest is similar in terms of age structure and tree species) and perform angle count sampling as developed by Walter Bitterlich in 1948. In a second step the annual growth is calculated using yield tables, and finally a harvest volume is derived. There are three particular reasons to investigate how remote sensing can be integrated into the current inventory system:

  1. The current process does not involve random sampling of the sampling points and thus does not offer any measure of the accuracy of the data.
  2. Forest engineers hardly ever rely on the inventory data as a stand-alone basis for logging planning. Most often they rely on intuition alone and on the total volume count that they have to deliver for a wider area every year.
  3. In the last ten years, the collection of high-resolution LiDAR data has become more cost-effective and most federal agencies in Germany have access to it.

In order to integrate the available remote-sensing data into forest inventories in Germany, it is important to tell different tree species apart as well as to estimate their volumes.


Hesse is one of the most forested federal states in Germany.

Goal:
The goal of this project is to perform an object-based classification of conifer trees in Northern Hesse based on high-resolution LiDAR and multi-spectral orthophotos. The first step is to delineate the tree crowns. The second step is to perform a semi-automated classification using the spectral signature of the different conifer species.

Data:
+ DSM (1m), DTM (1m), DSM (0.2 m) of the study area
+ Stereo images with 0.2 m resolution
+ high-resolution LiDAR data (average 10 points/m²)
+ forest inventory data
+ vector files of the individual forest stands
+ ground control points (field data)
All of this data is provided by the Hessian Forest Agency (HessenForst).

LAStools processing:
1) merge and clip the LAZ files [las2las]
2) classify ground and non-ground points [lasground]
3) remove low and high outliers [lasheight, lasnoise]
4) identify buildings within the study area [lasclassify]
5) create a normalized point cloud [lasheight]
6) create a highest-return canopy height model (CHM) [lasthin, las2dem]
7) create a pit-free CHM with the spike-free algorithm [las2dem] (see the command sketch below)
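
For steps 6 and 7 of this list, a hedged sketch might look as follows, assuming the tiles have already been height-normalized into a hypothetical ‘tiles_normalized’ folder; the 0.5 m step and the spike-free freeze distance of 1.0 are placeholder values that would need tuning to the actual 10 points/m² spacing:

lasthin -i tiles_normalized\*.laz ^
        -step 0.5 -highest -classify_as 8 ^
        -odir tiles_highest -olaz ^
        -cores 4

las2dem -i tiles_highest\*.laz ^
        -keep_class 8 ^
        -step 0.5 -use_tile_bb ^
        -odir chm_highest -obil ^
        -cores 4

las2dem -i tiles_normalized\*.laz ^
        -drop_z_below 0 ^
        -spike_free 1.0 ^
        -step 0.5 -use_tile_bb ^
        -odir chm_spike_free -obil ^
        -cores 4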

Removing Low Noise from RIEGL’s VUX-1 UAV LiDAR flown in the Philippines

In this tutorial we are removing some “tricky” low noise from LiDAR point clouds in order to produce a high-resolution Digital Terrain Model (DTM). The data was flown above a tropical beach and mangrove area in the Philippines using a VUX-1 UAV based system from RIEGL mounted on a helicopter. The survey was done as a test flight by AB Surveying who have the capacity to fly such missions in the Philippines and who have allowed us to share this data with you for educational purposes. You can download the data (1 GB) here. It covers a popular twin beach known as “Nacpan” near El Nido in Palawan (that we happen to have visited in 2014).


A typical beach fringed by coconut palms in Palawan, Philippines.

We start our usual quality check with a run of lasinfo. We add the ‘-cd’ switch to compute an average point density and the ‘-histo gps_time 1’ switch to produce a 1 second histogram for the GPS time stamps.

lasinfo -i lalutaya.laz ^
        -cd ^
        -histo gps_time 1 ^
        -odix _info -otxt

You can download the resulting lasinfo report here. It tells us that there are 118,740,310 points of type 3 (with RGB colors) with an average density of 57 last returns per square meter. The point coordinates are in the “PRS92 / Philippines 1” projection with EPSG code 3121 that is based on the “Clarke 1866” ellipsoid.

Datum Transform

We prefer to work in an UTM projection based on the “WGS 1984” ellipsoid, so we will first perform a datum transform based on the seven parameter Helmert transformation – a capacity that was recently added to LAStools. For this we first need a transform to get to geocentric or Earth-Centered Earth-Fixed (ECEF) coordinates on the current “Clarke 1866” ellipsoid, then we apply the Helmert transformation that operates on geocentric coordinates and whose parameters are listed in the TOWGS84 string of EPSG code 3121 to get to geocentric or ECEF coordinates on the “WGS 1984” ellipsoid. Finally we can convert the coordinates to the respective UTM zone. These three calls to las2las accomplish this.

las2las -i lalutaya.laz ^
        -remove_all_vlrs ^
        -epsg 3121 ^
        -target_ecef ^
        -odix _ecef_clark1866 -olaz

las2las -i lalutaya_ecef_clark1866.laz ^
        -transform_helmert -127.62,-67.24,-47.04,-3.068,4.903,1.578,-1.06 ^
        -wgs84 -ecef ^
        -ocut 10 -odix _wgs84 -olaz
 
las2las -i lalutaya_ecef_wgs84.laz ^
        -target_utm auto ^
        -ocut 11 -odix _utm -olaz

In these steps we implicitly reduced the resolution that each coordinate was stored with from quarter-millimeters (i.e. scale factors of 0.00025) to the default of centimeters (i.e. scale factors of 0.01) which should be sufficient for subsequent vegetation analysis. The LAZ files also compress better when coordinates have a lower resolution so that the ‘lalutaya_utm.laz’ file is over 200 MB smaller than the original ‘lalutaya.laz’ file. The quantization error this introduces is probably still below the overall scanning error of this helicopter survey.

Flightline Recovery

Playing back the file visually with lasview suggests that it contains more than one flightline. Unfortunately the point source ID field of the file is not properly populated with flightline information. However, when scrutinizing the GPS time stamp histogram in the lasinfo report we can see an occasional gap. We highlight two of these gaps in red between GPS second 540226 and 540272 and GPS second 540497 and 540556 in this excerpt from the lasinfo report:

gps_time histogram with bin size 1
[...]
 bin 540224 has 125768
 bin 540225 has 116372
 bin 540226 has 2707
 bin 540272 has 159429
 bin 540273 has 272049
 bin 540274 has 280237
[...]
 bin 540495 has 187103
 bin 540496 has 180421
 bin 540497 has 126835
 bin 540556 has 228503
 bin 540557 has 275025
 bin 540558 has 273861
[...]

We can use lassplit to recover the original flightlines based on gaps in the continuity of GPS time stamps that are bigger than 10 seconds:

lassplit -i lalutaya_utm.laz ^
         -recover_flightlines_interval 10 ^
         -odir strips_raw -o lalutaya.laz

This operation splits the points into 11 separate flightlines. The points within each flightline are stored in the order that the vendor software – which was RiPROCESS 1.7.2 from RIEGL according to the lasinfo report – had written them to file. We can use lassort to bring them back into the order they were acquired in by sorting first on the GPS time stamp and then on the return number field:

lassort -i strips_raw\*.laz ^
        -gps_time -return_number ^
        -odir strips_sorted -olaz ^
        -cores 4

Now we turn the sorted flightlines into tiles (with buffers !!!) for further processing. We also erase the current classification of the data into ground (2) and medium vegetation (4) as a quick visual inspection with lasview immediately shows that those are not correct:

lastile -i strips_sorted\*.laz ^
        -files_are_flightlines ^
        -set_classification 0 ^
        -tile_size 250 -buffer 30 -flag_as_withheld ^
        -odir tiles_raw -o lalu.laz

Quality Checking

Next comes the standard flightline overlap and alignment check with lasoverlap. Once more it becomes clear why it is so important to have flightline information: without it we might have missed what we are about to notice. We create false color images and load them into Google Earth to visually assess the situation. We map all absolute differences between flightlines below 5 cm to white and all absolute differences above 30 cm to saturated red (positive) or blue (negative), with a gradual shading from white to red or blue for any differences in between. With lasboundary we also create an overview KML file that lets us quickly see in which tile we can find the points for a particular area of interest.

lasoverlap -i tiles_raw\*.laz ^
           -step 1 -min_diff 0.05 -max_diff 0.30 ^
           -odir quality -opng ^
           -cores 4

lasboundary -i tiles_raw\*.laz ^
            -use_tile_bb -overview -labels ^
            -o quality\overview.kml

The resulting visualizations show (a) that our datum transform to the WGS84 ellipsoid worked because the imagery aligns nicely with Google Earth and (b) that there are several issues in the data that require further scrutiny.


In general the data seems well aligned (most open areas are white), but there are blue and red lines crossing the survey area. With lasview we have a closer look at the visible blue lines running along the beach in tile ‘lalu_765000_1252750.laz’ by repeatedly pressing ‘x’ to select a different subset and ‘x’ again to view this subset up close, while pressing ‘c’ to color it differently:

lasview -i tiles_raw\lalu_765000_1252750.laz

These lines of erroneous points happen not only along the beach but also in the middle of and below the vegetation, as can be seen below:


Our initial hope was to use the higher than usual intensity of these erroneous points to reclassify them to some classification code that we would then exclude from further processing. Visually we found that a reasonable cut-off value for this tile would be an intensity above 35000:

lasview -i tiles_raw\lalu_765000_1252750.laz ^
        -keep_intensity_above 35000 ^
        -filtered_transform ^
        -set_classification 23

However, while this method seems successful on the tile shown above, it fails miserably on others such as ‘lalu_764250_1251500.laz’ where large parts of the beach are very reflective and result in high intensity returns due to the dry white sand:

lasview -i tiles_raw\lalu_764250_1251500.laz ^
        -keep_intensity_above 35000 ^
        -filtered_transform ^
        -set_classification 23

Low Noise Removal

In the following we describe a workflow that can remove the erroneous points below the ground so that we can at least construct a high-quality DTM from the data. This will not, however, remove the erroneous points above the ground, so a subsequent vegetation analysis would still be affected. Our approach is based on two observations: (a) the erroneous points affect only a relatively small area and (b) different flightlines have their erroneous points in different areas. The idea is to compute a set of coarser ground points separately for each flightline and – when combining them in the end – to pick higher ground points over lower ones. The combined points should then define a surface that is above the erroneous below-ground points so that we can mark them with lasheight as points not to be used for the actual ground classification done thereafter.

The new huge_las_file_extract_ground_points_only.bat example batch script that you can download here does all the work needed to compute a set of coarser ground points for each flightline. Simply edit the file such that the LAStools variable points to your LAStools\bin folder and rename it to end with the *.bat extension. Then run:

huge_las_file_extract_ground_points_only strips_sorted\lalutaya_0000001.laz strips_ground_only\lalutaya_0000001.laz
huge_las_file_extract_ground_points_only strips_sorted\lalutaya_0000002.laz strips_ground_only\lalutaya_0000002.laz
huge_las_file_extract_ground_points_only strips_sorted\lalutaya_0000003.laz strips_ground_only\lalutaya_0000003.laz
...
huge_las_file_extract_ground_points_only strips_sorted\lalutaya_0000009.laz strips_ground_only\lalutaya_0000009.laz
huge_las_file_extract_ground_points_only strips_sorted\lalutaya_0000010.laz strips_ground_only\lalutaya_0000010.laz
huge_las_file_extract_ground_points_only strips_sorted\lalutaya_0000011.laz strips_ground_only\lalutaya_0000011.laz
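
The sketch below shows the general idea in a stripped-down form. It is not the actual batch script (which adds temporary folder management and more careful parameter choices); the tile size, thinning step, and lasground settings here are assumptions for illustration only.

rem hedged sketch of the idea behind huge_las_file_extract_ground_points_only.bat
rem %1 is the input flightline, %2 is the output file with coarse ground points only

lastile -i %1 ^
        -set_classification 0 ^
        -tile_size 250 -buffer 30 -flag_as_withheld ^
        -odir temp_tiles -o temp.laz

lasthin -i temp_tiles\*.laz ^
        -step 1 -lowest -classify_as 8 ^
        -odir temp_thinned -olaz ^
        -cores 4

lasground -i temp_thinned\*.laz ^
          -ignore_class 0 ^
          -town -extra_fine ^
          -odir temp_ground -olaz ^
          -cores 4

las2las -i temp_ground\*.laz -merged ^
        -drop_withheld -keep_class 2 ^
        -o %2

rmdir /s /q temp_tiles temp_thinned temp_ground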

The details on how this batch script works – a pretty standard tile-based multi-core processing workflow – are given as comments in this batch script. Now we have a set of individual ground points computed separately for each flightline and some will include erroneous points below the ground that the lasground algorithm by its very nature is likely to latch on to as you can see here:


The trick is now to utilize the redundancy of multiple scans per area and – when combining flightlines – to pick higher rather than lower ground points in overlap areas by using the ground point closest to the 75th elevation percentile per 2 meter by 2 meter area with at least 3 points with lasthin:

lasthin -i strips_ground_only\*.laz -merged ^
        -step 2 -percentile 75 3 ^
        -o lalutaya_ground_only_2m_75_3.laz

There are still some non-ground points in the result, as ground-classifying flightlines individually often results in vegetation returns being included in sparse areas along the edges of the flightlines, but we can easily get rid of those:

lasground_new -i lalutaya_ground_only_2m_75_3.laz ^
              -town -hyper_fine ^
              -odix _g -olaz

We sort the remaining ground points into a space-filling curve order with lassort and spatially index them with lasindex so they can be efficiently accessed by lasheight in the next step.

lassort -i lalutaya_ground_only_2m_75_3_g.laz ^
        -keep_class 2 ^
        -o lalutaya_ground.laz

lasindex -i lalutaya_ground.laz

Finally we have the means to robustly remove the erroneous points below the ground from all tiles. We use lasheight with the ground points we’ve just so painstakingly computed to classify all points 20 cm or more below the ground surface they define into classification code 23. Later we can simply ignore this classification code during processing:

lasheight -i tiles_raw\*.laz ^
          -ground_points lalutaya_ground.laz ^
          -do_not_store_in_user_data ^
          -classify_below -0.2 23 ^
          -odir tiles_cleaned -olaz ^
          -cores 4

Rather than trying to ground classify all remaining points we run lasground on a thinned subset of all points. For this we mark the lowest point in every 20 cm by 20 cm grid cell with some temporary classification code such as 6.

lasthin -i tiles_cleaned\*.laz ^
        -ignore_class 23 ^
        -step 0.20 -lowest -classify_as 6 ^
        -odir tiles_thinned -olaz ^
        -cores 4

Finally we can run lasground to compute the ground classification considering all points with classification code 6 by ignoring all points with classification codes 23 and 0.

lasground_new -i tiles_thinned\*.laz ^
              -ignore_class 23 0 ^
              -city -hyper_fine ^
              -odir tiles_ground_new -olaz ^
              -cores 4

And finally we can create a DTM with a resolution of 25 cm using las2dem and the result is truly beautiful:

las2dem -i tiles_ground_new\*.laz ^
        -keep_class 2 ^
        -step 0.25 -use_tile_bb ^
        -odir tiles_dtm_25cm -obil ^
        -cores 4


We have to admit that a few bumps are left (see mouse cursor below) but adjusting the parameters presented here is left as an exercise to the reader.


We would again like to acknowledge AB Surveying whose generosity has made this blog article possible. They have the capacity to fly such missions in the Philippines and have allowed us to share this data with you for educational purposes.

First Look with LAStools at LiDAR from Hovermap Drone by CSIRO

Last December we had a chance to visit the team of Dr. Stefan Hrabar at CSIRO in Pullenvale near Brisbane who work on a drone LiDAR system called Hovermap. This SLAM-based system is mainly developed for the purpose of autonomous flight and exploration of GPS-denied environments such as buildings, mines and tunnels. But as the SLAM algorithm continuously self-registers the scan lines it produces a LiDAR point cloud that in itself is a nice product. We started our visit with a short test flight around the on-site tower. You can download the LiDAR data and the drone trajectory of this little survey here:

The Hovermap system is based on the Velodyne Puck Lite (VLP-16) that is much cheaper and more lightweight than many other LiDAR systems. One interesting tidbit in the Hovermap setup is that the scanner is installed such that the entire Puck is constantly rotating as you can see in this video. But the Velodyne Puck is also known to produce somewhat “fluffy” surfaces with a thickness of a few centimeters. In a previous blog post with data from the YellowScan Surveyor system (that is also based on the Puck) we used a “median ground” surface to deal with the “fluff”. In the following we will have a look at the LiDAR data produced by Hovermap and how to process it with LAStools.


LiDAR data of CSIRO tower acquired during test flight of Hovermap system.

As always we start with a lasinfo report that computes the average density ‘-cd’ and histograms for the intensity and the GPS time:

lasinfo -i CSIRO_Tower\results.laz ^
        -cd ^
        -histo intensity 16 -histo gps_time 2 ^
        -odir CSIRO_Tower\quality -odix _info -otxt

A few excerpts of the resulting lasinfo report that you can download here are below:

lasinfo (180409) report for 'CSIRO_Tower\results.laz'
[...]
 number of point records: 16668904
 number of points by return: 0 0 0 0 0
 scale factor x y z: 0.0001 0.0001 0.0001
 offset x y z: -5.919576153930379 22.785394470724583 9.535698734939086
 min x y z: -138.6437 -125.2552 -34.1510
 max x y z: 126.8046 170.8260 53.2224
WARNING: full resolution of min_x not compatible with x_offset and x_scale_factor: -138.64370561381907
WARNING: full resolution of min_y not compatible with y_offset and y_scale_factor: -125.25518631070418
WARNING: full resolution of min_z not compatible with z_offset and z_scale_factor: -34.150966206894068
WARNING: full resolution of max_x not compatible with x_offset and x_scale_factor: 126.80455330595831
WARNING: full resolution of max_y not compatible with y_offset and y_scale_factor: 170.82597525215334
WARNING: full resolution of max_z not compatible with z_offset and z_scale_factor: -34.150966206894068
[...]
 gps_time 121.288045 302.983110
WARNING: 2 points outside of header bounding box
[...]
covered area in square units/kilounits: 51576/0.05
point density: all returns 323.19 last only 318.40 (per square units)
 spacing: all returns 0.06 last only 0.06 (in units)
WARNING: for return 1 real number of points by return is 16424496 but header entry was not set.
WARNING: for return 2 real number of points by return is 244408 but header entry was not set.
[...]
real max z larger than header max z by 0.000035
real min z smaller than header min z by 0.000035
[...]

Most of these warnings have to do with poorly chosen offset values in the LAS header that have many decimal digits instead of being nice round numbers. The points are stored with sub-millimeter resolution (scale factors of 0.0001) which is unnecessarily precise for a UAV flying a Velodyne Puck where the overall system error can be expected to be on the order of a few centimeters. Also the histogram of return numbers in the LAS header was not populated. We can fix these issues with one call to las2las:

las2las -i CSIRO_Tower\results.laz ^
        -rescale 0.01 0.01 0.01 ^
        -auto_reoffset ^
        -odix _fixed -olaz

If you create another lasinfo report on the fixed file you will see that all the warnings have gone. The file size is now also only 102 MB instead of 142 MB because centimeter coordinates compress much better than sub-millimeter coordinates.

The average density of 318 last returns per square meter reported by lasinfo is not that useful for a UAV survey because it does not account for the highly varying distribution of LiDAR returns in the area surveyed. With lasgrid we can get a much clearer picture of that.

lasgrid -i CSIRO_Tower\results_fixed.laz ^
        -last_only ^
        -step 0.5 -use_bb -density ^
        -false -set_min_max 0 1500 ^
        -o CSIRO_Tower\quality\density_0_1500.png

LiDAR density: blue is close to zero and red is 1500 or more last returns / sqr mtr

The red dot in the point density indicates an area with over 1500 last returns per square meter. No surprise that this is the take-off and touch-down location of the copter drone. Naturally this spot is completely over-scanned compared to the rest of the area. We can remove these points with the help of the timestamps by cutting off the start and the end of the recording.

The total recording time including take-off, flight around the tower, and touch-down was around 180 seconds or 3 minutes as the lasinfo report tells us. Note that the recorded time stamps are neither “GPS Week Time” nor “Adjusted Standard GPS Time” but an internal system time. By visualizing the trajectory of the UAV with lasview while binning the timestamps into the intensity field we can easily determine what interval of timestamps describes the actual survey flight. First we convert the drone trajectory from the textual ASCII format to the LAZ format with txt2las:

txt2las -i CSIRO_Tower\results_traj.txt ^
        -skip 1 ^
        -parse txyz ^
        -set_classification 12 ^
        -olaz

lasview -i CSIRO_Tower\results_traj.laz ^
        -bin_gps_time_into_intensity 1

Binning timestamps into intensity allows visually determining start and end of survey.

Using lasview and pressing <i> while hovering over those points of the trajectory that appear to be the survey start and end, we determine visually that the timestamps between 164 and 264 correspond to the actual survey flight over the area of interest with the take-off and touch-down maneuvers excluded. We use las2las to cut out the relevant part and re-run lasgrid:

las2las -i CSIRO_Tower\results_fixed.laz ^
        -keep_gps_time 164 264 ^
        -o CSIRO_Tower\results_survey.laz

lasgrid -i CSIRO_Tower\results_survey.laz ^
        -last_only ^
        -step 0.5 -use_bb -density ^
        -false -set_min_max 0 1500 ^
        -o CSIRO_Tower\quality\density_0_1500_survey.png

LiDAR density after removing take-off and touch-down maneuvers.

The other set of points we are less interested in are those occasional hits far from the scanner that sample the area too sparsely to be useful for anything. We use lastrack to reclassify points as noise (7) that exceed an x/y distance of 50 meters from the trajectory and then use lasgrid to create another density image without the points classified as noise.

lastrack -i CSIRO_Tower\results_survey.laz ^
         -track CSIRO_Tower\results_traj.laz ^
         -classify_xy_range_between 50 1000 7 ^
         -o CSIRO_Tower\results_xy50.laz

lasgrid -i CSIRO_Tower\results_xy50.laz ^
        -last_only -keep_class 0 ^
        -step 0.5 -use_bb -density ^
        -false -set_min_max 0 1500 ^
        -o CSIRO_Tower\quality\density_0_1500_xy50.png

LiDAR density after removing returns farther than 50 m from trajectory.

We process the remaining points using a typical tile-based processing pipeline. First we run lastile to create a tiling of 200 meter by 200 meter tiles with 20 meter buffers while dropping the noise points:

lastile -i CSIRO_Tower\results_xy50.laz ^
        -drop_class 7 ^
        -tile_size 200 -buffer 20 -flag_as_withheld ^
        -odir CSIRO_Tower\tiles_raw -o eta.laz

Because of the high sampling we expect there to be quite a few duplicate points where all three coordinates x, y, and z are identical. We remove them with a call to lasduplicate:

lasduplicate -i CSIRO_Tower\tiles_raw\*.laz ^
             -unique_xyz ^
             -odir CSIRO_Tower\tiles_unique -olaz ^
             -cores 4

This removes between 12 and 25 thousand points from each tile. Then we use lasnoise to classify isolated points as noise:

lasnoise -i CSIRO_Tower\tiles_unique\*.laz ^
         -step_xy 0.5 -step_z 0.1 -isolated 5 ^
         -odir CSIRO_Tower\tiles_denoised_temp -olaz ^
         -cores 4

Aggressive parameters ensure most noise points below ground are found.

This classifies between 13 and 23 thousand points from each tile into the noise classification code 7. We use rather aggressive settings to make sure we get most of the noise points that are below the terrain. Getting a correct ground classification in the next few steps is the main concern now, even if this means that many points above the terrain on wires, towers, or vegetation will also get misclassified as noise (at least temporarily). Next we use lasthin to classify a subset of points with classification code 8 on which we will then run the ground classification. We classify each point that is closest to the 5th percentile in elevation per 25 cm by 25 cm grid cell given there are at least 20 non-noise points in a cell. We then repeat this while increasing the cell size to 50 cm by 50 cm and 100 cm by 100 cm.

lasthin -i CSIRO_Tower\tiles_denoised_temp\*.laz ^
        -ignore_class 7 ^
        -step 0.25 -percentile 5 20 -classify_as 8 ^
        -odir CSIRO_Tower\tiles_thinned_025 -olaz ^
        -cores 4

lasthin -i CSIRO_Tower\tiles_thinned_025\*.laz ^
        -ignore_class 7 ^
        -step 0.50 -percentile 5 20 -classify_as 8 ^
        -odir CSIRO_Tower\tiles_thinned_050 -olaz ^
        -cores 4

lasthin -i CSIRO_Tower\tiles_thinned_050\*.laz ^
        -ignore_class 7 ^
        -step 1.00 -percentile 5 20 -classify_as 8 ^
        -odir CSIRO_Tower\tiles_thinned_100 -olaz ^
        -cores 4

 


Then we ground classify the points that were classified into the temporary classification code 8 in the previous step using lasground.

lasground -i CSIRO_Tower\tiles_thinned_100\*.laz ^
          -ignore_class 7 0 ^
          -town -ultra_fine ^
          -odir CSIRO_Tower\tiles_ground -olaz ^
          -cores 4

The resulting ground points are a lower envelope of the “fluffy” sampled surfaces produced by the Velodyne Puck scanner. We use lasheight to thicken the ground by moving all points between 1 cm below and 6 cm above the TIN of these “low ground” points to a temporary classification code 6 representing a “thick ground”. We also undo the overly aggressive noise classifications above the ground by setting all higher points back to classification code 1 (unclassified).

lasheight -i CSIRO_Tower\tiles_ground\*.laz ^
          -classify_between -0.01 0.06 6 ^
          -classify_above 0.06 1 ^
          -odir CSIRO_Tower\tiles_ground_thick -olaz ^
          -cores 4

Profile view for 25 centimeter wide strip of open terrain. Top: Green points are low ground. Orange points are thickened ground with 5 cm drop lines. Middle: Brown points are median ground computed from thick ground. Bottom: Comparing low ground points (in green) with median ground points (in brown).

From the “thick ground” we then compute a “median ground” using lasthin in a similar fashion as we used it before. A profile view for a 25 centimeter wide strip of open terrain illustrates the workflow of going from “low ground” via “thick ground” to “median ground” and shows the slight difference in elevation between the two.

lasthin -i CSIRO_Tower\tiles_ground_thick\*.laz ^
        -ignore_class 0 1 7 ^
        -step 0.25 -percentile 50 10 -classify_as 2 ^
        -odir CSIRO_Tower\tiles_ground_median_025 -olaz ^
        -cores 4

lasthin -i CSIRO_Tower\tiles_ground_median_025\*.laz ^
        -ignore_class 0 1 7 ^
        -step 0.50 -percentile 50 10 -classify_as 2 ^
        -odir CSIRO_Tower\tiles_ground_median_050 -olaz ^
        -cores 4

lasthin -i CSIRO_Tower\tiles_ground_median_050\*.laz ^
        -ignore_class 0 1 7 ^
        -step 1.00 -percentile 50 10 -classify_as 2 ^
        -odir CSIRO_Tower\tiles_ground_median_100 -olaz ^
        -cores 4

Then we use lasnoise once more with more conservative settings to remove the noise points that are sprinkled around the scene.

lasnoise -i CSIRO_Tower\tiles_ground_median_100\*.laz ^
         -step_xy 1.0 -step_z 1.0 -isolated 5 ^
         -odir CSIRO_Tower\tiles_denoised -olaz ^
         -cores 4

While we classify the scene into building roofs, vegetation, and everything else with lasclassify we also move all (unused) classifications to classification code 1 (unclassified). You may play with the parameters of lasclassify (see README) to achieve a better building classification. However, those buildings that the laser can peek into (either via a window or because they are gazebo-like structures) will not be classified correctly unless you somehow remove the points that are under the roof.

lasclassify -i CSIRO_Tower\tiles_denoised\*.laz ^
            -ignore_class 7 ^
            -change_classification_from_to 0 1 ^
            -change_classification_from_to 6 1 ^
            -step 1 ^
            -odir CSIRO_Tower\tiles_classified -olaz ^
            -cores 4

A glimpse at the final classification result is below: a hillshaded DTM and a strip of classified points. Of course the tower was misclassified as vegetation given that it looks just like a tree to the logic used in lasclassify.


The hillshaded DTM with a strip of classified points.

Finally we remove the tile buffers (that were really important for tile-based processing) with lastile:

lastile -i CSIRO_Tower\tiles_classified\*.laz ^
        -remove_buffer ^
        -odir CSIRO_Tower\tiles_final -olaz ^
        -cores 4

And publish the LiDAR point cloud as version 1.6 of Potree using laspublish:

laspublish -i CSIRO_Tower\tiles_final\*.laz ^
           -i CSIRO_Tower\results_traj.laz ^
           -only_3D -elevation -overwrite -potree16 ^
           -title "CSIRO Tower" ^
           -description "HoverMap test flight, 18 Dec 2017" ^
           -odir CSIRO_Tower\tiles_portal -o portal.html -olaz

Note that we also added the trajectory of the drone because it looks nice and gives a good illustration of how the UAV was scanning the scene.


Via Potree we can publish and explore the final point cloud using any modern Web browser.

We would like to thank the entire team around Dr. Stefan Hrabar for taking time out of their busy schedules just a few days before Christmas.

LASmoons: Martin Buchauer

Martin Buchauer (recipient of three LASmoons)
Cartography & Geomedia Technology
University of Applied Science Munich, GERMANY

Background:
Salt marsh areas provide numerous services such as natural flood defense, carbon sequestration, and agricultural services, and they are a valuable coastal habitat for flora, fauna, and humans. However, they are not only threatened by the constant rise of sea levels caused by global warming but also by human settlement in coastal areas. Sensible local coastal development is important as it may help to support the development and progression of stressed salt marshes.


Looking South you can see the salt marsh area next to a famous golf course with St Andrews in the background.

Goal:

This research aims to visualize and extract vegetation metrics from four salt marsh data sets derived from terrestrial laser scanning, as well as to analyze their temporal changes. Located at the South and North shore of the Eden Estuary near St Andrews, Scotland, the scans were acquired in the summer and winter of 2016. Ground-based laser scanning is an ideal method for fully reconstructing vegetation structures as well as for retrieving meaningful metrics such as height, area, and vegetation density. Although this technology has frequently been applied in the area of forestry, its application to salt marsh areas has not yet been fully explored.

Data:
+ TLS data acquired with a Leica HDS6100 (average density of 38000 points/m²)
+ ground control points (field data)

LAStools processing:
1) check the quality of the LiDAR data [lasinfo, lasoverlap, lasgrid]
2) merge and retile the original data with buffers [lastile]
3) classify point clouds into ground and non-ground [lasthin, lasground]
4) create digital terrain (DTM) and digital surface models (DSM) [lasthin, las2dem, blast2dem] (see the command sketch below)
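
Given the extreme TLS point density, a hedged sketch of steps 2 to 4 could look like the lines below; the 50 m tiles, the 5 cm thinning step, and the 10 cm raster resolution are only assumptions that would need to be tuned to the actual extent and density of the Eden Estuary scans:

lastile -i tls_scans\*.laz ^
        -tile_size 50 -buffer 5 -flag_as_withheld ^
        -odir tiles_raw -o eden.laz

lasthin -i tiles_raw\*.laz ^
        -step 0.05 -lowest -classify_as 8 ^
        -odir tiles_thinned -olaz ^
        -cores 4

lasground -i tiles_thinned\*.laz ^
          -ignore_class 0 ^
          -nature -ultra_fine ^
          -odir tiles_ground -olaz ^
          -cores 4

las2dem -i tiles_ground\*.laz ^
        -keep_class 2 ^
        -step 0.1 -use_tile_bb ^
        -odir tiles_dtm_10cm -obil ^
        -cores 4

lasthin -i tiles_raw\*.laz ^
        -step 0.1 -highest ^
        -odir tiles_highest -olaz ^
        -cores 4

las2dem -i tiles_highest\*.laz ^
        -step 0.1 -use_tile_bb ^
        -odir tiles_dsm_10cm -obil ^
        -cores 4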


New Step-by-Step Tutorial for Velodyne Drone LiDAR from Snoopy by LidarUSA

The folks from Harris Aerial gave us LiDAR data from a test-flight of one of their drones, the Carrier H4 Hybrid HE (with a 5kg maximum payload and a retail price of US$ 28,000), carrying a Snoopy A series LiDAR system from LidarUSA in the countryside near Huntsville, Alabama. The laser scanner used by the Snoopy A series is a Velodyne HDL 32E that has 32 different laser/detector pairs that fire in succession to scan up to 700,000 points per second within a range of 1 to 70 meters. You can download the raw LiDAR file from the 80 second test flight here. As always, the first thing we do is to visualize the file with lasview and to generate a textual report of its contents with lasinfo.

lasview -i Velodyne001.laz -set_min_max 680 750

It becomes obvious that the drone must have scanned parts of itself (probably the landing gear) during the flight and we exploit this fact in the later processing. The information which of the 32 lasers was collecting which point is stored into the ‘point source ID’ field which is usually used for the flightline information. This results in a psychedelic look in lasview as those 32 different numbers get mapped to the 8 different colors that lasview uses for distinguishing flightlines.


The lasinfo report we generate computes the average point density with ‘-cd’ and includes histograms for a number of point attributes, namely for ‘user data’, ‘intensity’, ‘point source ID’, ‘GPS time’, and ‘scan angle rank’.

lasinfo -i Velodyne001.laz ^
        -cd ^
        -histo user_data 1 ^
        -histo point_source 1 ^
        -histo intensity 16 ^
        -histo gps_time 1 ^
        -histo scan_angle_rank 5 ^
        -odir quality -odix _info -otxt

You can download the resulting report here and it will tell you that the information which of the 32 lasers was collecting which point was stored both into the ‘user data’ field and into the ‘point source ID’ field. The warnings you see below have to do with the fact that the double-precision bounding box stored in the LAS header was populated with numbers that have many more decimal digits than the coordinates in the file, which only have millimeter (or millifeet) resolution as all three scale factors are 0.001 (meaning three decimal digits).

WARNING: stored resolution of min_x not compatible with x_offset and x_scale_factor: 2171988.6475160527
WARNING: stored resolution of min_y not compatible with y_offset and y_scale_factor: 1622812.606925504
WARNING: stored resolution of min_z not compatible with z_offset and z_scale_factor: 666.63504345017589
WARNING: stored resolution of max_x not compatible with x_offset and x_scale_factor: 2172938.973065129
WARNING: stored resolution of max_y not compatible with y_offset and y_scale_factor: 1623607.5209975131
WARNING: stored resolution of max_z not compatible with z_offset and z_scale_factor: 1053.092674726669

Both the “return number” and the “number of returns” attribute of every point in the file is 2. This makes it appear as if the file would only contain the last returns of those laser shots that produced two returns. However, as the Velodyne HDL 32E only produces one return per shot we can safely conclude that those numbers should all be 1 instead of 2 and that this is just a small bug in the export software. We can easily fix this with las2las.

reporting minimum and maximum for all LAS point record entries ...
[...]
 return_number 2 2
 number_of_returns 2 2
[...]

The lasinfo report lacks information about the coordinate reference system as there is no VLR that stores projection information. Harris Aerial could not help us other than telling us that the scan was near Huntsville, Alabama. Measuring certain distances in the scene like the height of the house or the tree suggests that both horizontal and vertical units are in feet, or rather in US survey feet. After some experimenting we find that using EPSG 26930 for NAD83 Alabama West but forcing the default horizontal units from meters to US survey feet gives a result that aligns well with high-resolution Google Earth imagery as you can see below:

lasgrid -i flightline1.laz ^
        -i flightline2.laz ^
        -merged ^
        -epsg 26930 -survey_feet ^
        -step 1 -highest ^
        -false -set_min_max 680 750 ^
        -o testing26930usft.png

Using EPSG code 26930 but with US survey feet instead of meters results in nice alignment with GE imagery.

We use the fact that the drone has scanned itself to extract an (approximate) trajectory by isolating those LiDAR returns that have hit the drone. Via a visual check with lasview (by hovering with the cursor over the lowest drone hits and pressing hotkey ‘i’) we determine that the lowest drone hits are all above 719 feet. We use two calls to las2las to split the point cloud vertically. In the same call we also change the resolution from three to two decimal digits, fix the return number issue, and add the missing geo-referencing information:

las2las -i Velodyne001.laz ^
        -rescale 0.01 0.01 0.01 ^
        -epsg 26930 -survey_feet -elevation_survey_feet ^
        -set_return_number 1 ^
        -set_number_of_returns 1 ^
        -keep_z_above 719 ^
        -odix _above719 -olaz

las2las -i Velodyne001.laz ^
        -rescale 0.01 0.01 0.01 ^
        -epsg 26930 -survey_feet -elevation_survey_feet ^
        -set_return_number 1 ^
        -set_number_of_returns 1 ^
        -keep_z_below 719 ^
        -odix _below719 -olaz

We then use the manual editing capabilities of lasview to change the classifications of the LiDAR points that correspond to drone hits from 1 to 12, which is illustrated by the series of screen shots below.

lasview -i Velodyne001_above719.laz

The workflow illustrated above results in a tiny LAY file that is part of the LASlayers functionality of LAStools. It only encodes the few changes in classifications that we’ve made to the LAZ file without re-writing those parts that have not changed. Those interested may use laslayers to inspect the structure of the LAY file:

laslayers -i Velodyne001_above719.laz

We can apply the LAY file on-the-fly with the ‘-ilay’ option, for example, when running lasview:

lasview -i Velodyne001_above719.laz -ilay

To separate the drone-hit trajectory from the remaining points we run lassplit with the ‘-ilay’ option and request to split by classification with this command line:

lassplit -i Velodyne001_above719.laz -ilay ^
         -by_classification -digits 3 ^
         -olaz

This gives us two new files with the three-digit appendices ‘_001’ and ‘_012’. The latter one contains those points we marked as being part of the trajectory. We now want to use lasview to – visually – find a good moment in time where to split the trajectory into multiple flightlines. The lasinfo report tells us that the GPS time stamps are in the range from 418,519 to 418,602. In order to use the same trick as we did in our recent article on processing LiDAR data from the Hovermap Drone, where we mapped the GPS time to the intensity for querying it via lasview, we first need to subtract a large number from the GPS time stamps to bring them into a suitable range that fits the intensity field as done here.

lasview -i Velodyne001_above719_012.laz ^
        -translate_gps_time -418000 ^
        -bin_gps_time_into_intensity 1 ^
        -steps 5000

The ‘-steps 5000’ argument makes for a slower playback (press ‘p’ to repeat) to better follow the trajectory.

Hovering with the mouse over a point that – visually – seems to be one of the turning points of the drone and pressing ‘i’ on the keyboard shows an intensity value of 548 which corresponds to the GPS time stamp 418548, which we then use to split the LiDAR point cloud (without the trajectory) into two flightlines:

las2las -i Velodyne001_below719.laz ^
        -i Velodyne001_above719_001.laz ^
        -merged ^
        -keep_gps_time_below 418548 ^
        -o flightline1.laz

las2las -i Velodyne001_below719.laz ^
        -i Velodyne001_above719_001.laz ^
        -merged ^
        -keep_gps_time_above 418548 ^
        -o flightline2.laz

Next we use lasoverlap to check how well the LiDAR points from the flight out and the flight back align vertically. This tool computes the difference of the lowest points for each square foot covered by both flightlines. Differences of less than a quarter of a foot are mapped to white, differences of more than half a foot are mapped to saturated red or blue depending on whether the difference is positive or negative:

lasoverlap -i flightline1.laz ^
           -i flightline2.laz ^
           -faf ^
           -min_diff 0.25 -max_diff 0.50 -step 1 ^
           -odir quality -o overlap_025_050.png

We then use a new feature of the LAStools GUI (as of version 180429) to more closely inspect larger red or blue areas. We want to use lasmerge and clip out any region that looks suspect for closer examination with lasview. We start the tool in the GUI mode with the ‘-gui’ command and the two flightlines pre-loaded. Using the new PNG overlay roll-out on the left we add the ‘overlap_025_050_diff.png’ image from the quality folder created in the last step and clip out three areas.

lasmerge -i flightline1.laz -i flightline2.laz -gui

You can also clip out these three areas using the command lines below:

lasmerge -i flightline1.laz -i flightline2.laz ^
         -faf ^
         -inside 2172500 1623160 2172600 1623165 ^
         -o clip2500_3160_100x005.laz

lasmerge -i flightline1.laz -i flightline2.laz ^
         -faf ^
         -inside 2172450 1623450 2172550 1623455 ^
         -o clip2450_3450_100x005.laz

lasmerge -i flightline1.laz -i flightline2.laz ^
         -faf ^
         -inside 2172430 1623290 2172530 1623310 ^
         -o clip2430_3290_100x020.laz

A closer inspection of the three cut out slices explains the red and blue areas in the difference image created by lasoverlap. We find a small systematic error in two of the slices. In slice ‘clip2500_3160_100x005.laz‘ the green points from flightline 1 are on average slightly higher than the red points from flightline 2. Vice-versa in slice ‘clip2450_3450_100x005.laz‘ the green points from flightline 1 are on average slightly lower than the red points from flightline 2. However, the error is less than half a foot and only happens near the edges of the flightlines. Given that our surfaces are expected to be “fluffy” anyways (as is typical for Velodyne LiDAR systems), we accept these differences and continue processing.


The strong red and blue colors in the center of the difference image created by lasoverlap are easily explained by looking at slice ‘clip2430_3290_100x020.laz‘. The scanner was “looking” under a gazebo-like open roof structure from two different directions and therefore always seeing parts of the floor in one flightline that were obscured by the roof in the other.


While working with this data we’ve also implemented a new feature for lastrack that computes the 3D distance between LiDAR points and the trajectory and allows storing the result as an additional per-point attribute with extra bytes. Those can then be visualized with lasgrid. Here is an example:

lastrack -i flightline1.laz ^
         -i flightline2.laz ^
         -track Velodyne001_above719_012.laz ^
         -store_xyz_range_as_extra_bytes ^
         -odix _xyz_range -olaz ^
         -cores 2

lasgrid -i flightline*_xyz_range.laz -merged ^
        -drop_attribute_below 0 1 ^
        -attribute0 -lowest ^
        -false -set_min_max 20 200 ^
        -o quality/closest_xyz_range_020ft_200ft.png

lasgrid -i flightline*_xyz_range.laz -merged ^
        -drop_attribute_below 0 1 ^
        -attribute0 -highest ^
        -false -set_min_max 30 300 ^
        -o quality/farthest_xyz_range_030ft_300ft.png

Below the complete processing pipeline for creating a median ground model from the “fluffy” Velodyne LiDAR data that results in the hillshaded DTM shown here. The workflow is similar to those we have developed in earlier blog posts for Velodyne Puck based systems like the Hovermap and the Yellowscan.


Hillshaded DTM with a resolution of 1 foot generated via a median ground computation by the LAStools processing pipeline detailed below.

lastile -i flightline1.laz ^
        -i flightline2.laz ^
        -faf ^
        -tile_size 250 -buffer 25 -flag_as_withheld ^
        -odir tiles_raw -o somer.laz

lasnoise -i tiles_raw\*.laz ^
         -step_xy 2 -step_z 1 -isolated 9 ^
         -odir tiles_denoised -olaz ^
          -cores 4

lasthin -i tiles_denoised\*.laz ^
        -ignore_class 7 ^
        -step 1 -percentile 0.05 10 -classify_as 8 ^
        -odir tiles_thinned_1_foot -olaz ^
        -cores 4

lasthin -i tiles_thinned_1_foot\*.laz ^
        -ignore_class 7 ^
        -step 2 -percentile 0.05 10 -classify_as 8 ^
        -odir tiles_thinned_2_foot -olaz ^
        -cores 4

lasthin -i tiles_thinned_2_foot\*.laz ^
        -ignore_class 7 ^
        -step 4 -percentile 0.05 10 -classify_as 8 ^
        -odir tiles_thinned_4_foot -olaz ^
        -cores 4

lasthin -i tiles_thinned_4_foot\*.laz ^
        -ignore_class 7 ^
        -step 8 -percentile 0.05 10 -classify_as 8 ^
        -odir tiles_thinned_8_foot -olaz ^
        -cores 4

lasground -i tiles_thinned_8_foot\*.laz ^
          -ignore_class 1 7 ^
          -town -extra_fine ^
          -odir tiles_ground_lowest -olaz ^
          -cores 4

lasheight -i tiles_ground_lowest\*.laz ^
          -classify_between -0.05 0.5 6 ^
          -odir tiles_ground_thick -olaz ^
          -cores 4

lasthin -i tiles_ground_thick\*.laz ^
        -ignore_class 1 7 ^
        -step 1 -percentile 50 -classify_as 2 ^
        -odir tiles_ground_median -olaz ^
        -cores 4

las2dem -i tiles_ground_median\*.laz ^
        -keep_class 2 ^
        -step 1 -use_tile_bb ^
        -odir tiles_dtm -obil ^
        -cores 4

blast2dem -i tiles_dtm\*.bil -merged ^
          -step 1 -hillshade ^
          -o dtm_hillshaded.png

We thank Harris Aerial for sharing this LiDAR data set with us flown by their Carrier H4 Hybrid HE drone carrying a Snoopy A series LiDAR system from LidarUSA.

LASmoons: Alex S. Olpenda

Alex S. Olpenda (recipient of three LASmoons)
Department of Geomatics and Spatial Planning, Faculty of Forestry
Warsaw University of Life Sciences, POLAND

Background:
The Bialowieza Forest is a trans-boundary property along the borders of Poland and Belarus consisting of diverse Central European lowland forest covering a total area of 141,885 hectares. Enlisted as one of the world’s biosphere reserves and a UNESCO World Heritage Site, the Bialowieza Forest conserves a complex ecosystem that supports vast wildlife including at least 250 species of birds and more than 50 mammals such as wolf, moose, lynx and the largest free-roaming population of the forest’s iconic species, the European bison [1]. The area is also significantly rich in dead wood which becomes a home for countless species of mushrooms, mold, bacteria and insects, many of which are threatened with extinction [2]. Another factor, aside from soil type, that impacts the species of plant communities growing in the area is humidity [3], which can be considered a function of solar radiation. Understanding the interactions and dynamics of these elements within the environment is vital for proper management and conservation practices. Sunlight below canopies is a driving force that affects the growth and survival of both fauna and flora directly and indirectly. Measurement and monitoring of this variable is crucial.

Goal:
Remote sensing technology can describe the light conditions inside the forest with relatively high spatial and temporal resolution at large scale. The goal of this research is to develop a predictive model to estimate the sub-canopy light conditions of the Bialowieza Forest inside Poland’s territory using LiDAR data. Aside from common metrics based on heights and intensities, selected metrics known to be related to transmitted light will also be extracted. Returns that belong to or are close to the ground are a good proxy for sun rays that reach the forest floor, while vegetation-classified returns can be assumed to be the ones impeding the light. Relationships between these metrics and both the direct and the diffuse sunlight derived from hemispherical photographs will be explored. Furthermore, multiple regression will then be conducted between the parameters. Previous similar studies have been done successfully, but mostly in homogeneous forests. The task might pose a challenge as the Bialowieza Forest is a mixture of conifers and broad-leaved trees.


Location map of the study site with 100 random sample plots.

Data:
+ 2015 ALS data set obtained using full waveform sensor (Riegl LMS-Q680i)
+ discrete point clouds (average pulse density: 6 points/m²)
+ 134 flightlines with 40% overlap
+ forest inventory data (100 circular plots, 12.62 m radius)
+ colored hemispherical photographs
All of this data is provided by the Forest Research Institute through the ForBioSensing project.

LAStools processing:
1) data quality checking [lasinfo, lasoverlap, lasgrid, lasreturn]
2) merge and clip the LAZ files [las2las]
3) classify ground and non-ground points [lasground]
4) remove low and high outliers [lasheight, lasnoise]
5) create a normalized point cloud [lasheight]
6) compute forestry metrics for each plot [lascanopy] (see the command sketch below)
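
For step 6, a hedged sketch of computing plot-level metrics is shown below; the plot center coordinates are entirely hypothetical and the 1 m height cutoff is an assumption, but the 12.62 m radius matches the inventory plots described above:

rem clip one of the 100 circular field plots from the height-normalized tiles
rem (the center coordinates 640000 / 5860000 are placeholders)
las2las -i tiles_normalized\*.laz -merged ^
        -keep_circle 640000 5860000 12.62 ^
        -o plot_001.laz

rem compute canopy metrics related to light transmission for that plot
lascanopy -i plot_001.laz ^
          -files_are_plots ^
          -height_cutoff 1.0 ^
          -cov -dns -p 25 50 75 95 ^
          -o plot_001_metrics.csv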

References:
[1] UNESCO. World Heritage List. Available online (accessed on 2 October 2017).
[2] Polish Tourism Organization. Official Travel Website. Available online (accessed on 3 October 2017).
[3] Bialowieza National Park. Available online (accessed on 3 October 2017).

LASmoons: Maeva Dang

Maeva Dang (recipient of three LASmoons)
Industrial Building and interdisciplinary Planning, Faculty of Civil Engineering
Vienna University of Technology, AUSTRIA

Background:
After centuries of urbanization and industrialization the green landscape of Rio de Janeiro in Brazil must be regenerated. The forests and other green areas, providers of ecosystem services, are fragmented and surrounded by dense urban occupation [1]. The loss of vegetation in the city reduces the amount of cooling and increases the urban heat island effect. The metropolis also has a chronic problem with floods as a result of the lack of sustainable planning in urban areas of low permeability. A well-designed green infrastructure system is highly needed, since it would help the city to mitigate the negative effects of its urbanization and to be more resilient to environmental changes [2]. Intensive green roofs provide a large range of benefits, from enhancing biodiversity in the city to reducing flood risks and mitigating the urban heat island effect. The present research aims to quantitatively and accurately assess the intensive greening potential of the roof landscape of Rio de Janeiro based on LiDAR data.


A view of the roof landscape of the Urca district. Rio de Janeiro has high contrasts of forests and dense urban environments.

Goal:
The LAStools software will be used to check the quality of the data and to create a Digital Terrain Model (DTM) and a Digital Surface Model (DSM) for the city of Rio de Janeiro. The goal of the study is to identify the existing flat roof surfaces suitable for intensive greening (i.e. those with a slope between 0 and 5 degrees). The results will be provided to the public free of charge.

Data:
+ Airborne LiDAR data provided by the City Hall of Rio de Janeiro, Instituto Municipal de Urbanismo Pereira Passos (IPP)
+ Average pulse density 2 pulses per square meter
+ Sensor System: Leica ALS60

LAStools processing:
1) check the quality of the LiDAR data [lasinfo, lasoverlap, lasgrid]
2) classify into ground and non-ground points using tile-based processing [lastile, lasground]
3) remove low and high outliers [lasheight, lasnoise]
4) identify buildings within the study area [lasclassify] (see the sketch after this list)
5) normalize LiDAR heights [lasheight]
6) generate DTM and DSM [las2dem, lasgrid]
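
As a rough sketch of how steps 4 and 6 could come together for the flat-roof analysis (folder and file names are illustrative, and heights above ground are assumed to have been computed already), lasclassify marks the roof points as class 6 and las2dem then rasterizes only those roof points so that the slope of each candidate roof can be evaluated in a GIS:

rem sketch only: folder and file names are illustrative
lasclassify -i Rio\tiles_ground\rio*.laz ^
            -ignore_class 7 ^
            -odir Rio\tiles_classified -olaz ^
            -cores 4

las2dem -i Rio\tiles_classified\rio*.laz ^
        -keep_class 6 ^
        -step 1.0 -use_tile_bb ^
        -odir Rio\rasters_roofs -obil ^
        -cores 4

The 1.0 meter step is a guess that roughly matches the reported density of 2 pulses per square meter.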

References:
[1] Herzog, C. (2012). Connecting the wonderful landscapes of Rio de Janeiro. Available online (accessed on 07/06/18).
[2] European Commission (2011). Communication from the Commission to the European Parliament, the Council, the Economic and Social Committee and the Committee of the Regions: Our life insurance, our natural capital: an EU biodiversity strategy to 2020. Available online (accessed on 07/06/18).

Complete LiDAR Processing Pipeline: from raw Flightlines to final Products

This tutorial serves as an example of a complete end-to-end workflow that starts with raw LiDAR flightlines (as they may be delivered by a vendor) and ends with classified LiDAR tiles and derived products such as raster DTMs and DSMs and SHP files with contours, building footprints, and vegetation layers. The three example flightlines we are using here were flown in Ayutthaya, Thailand with a RIEGL LMS-Q680i LiDAR scanner by Asian Aerospace Services, who are based at Don Mueang airport in Bangkok, from where they serve South-East Asia and beyond. You can download them here:

Quality Checking

The minimal quality checks consist of generating textual reports (lasinfo & lasvalidate), inspecting the data visually (lasview), making sure alignment and overlap between flightlines fulfill expectations (lasoverlap), and measuring pulse density per square meter (lasgrid). Additional checks for point replication (lasduplicate), completeness of all returns per pulse (lasreturn), and validation against external ground control points (lascontrol) may also be performed.

lasinfo -i Ayutthaya\strips_raw\*.laz ^
        -cd ^
        -histo z 5 ^
        -histo intensity 64 ^
        -odir Ayutthaya\quality -odix _info -otxt ^
        -cores 3

lasvalidate -i Ayutthaya\strips_raw\*.laz ^
            -o Ayutthaya\quality\validate.xml

The lasinfo report generated with the command line shown computes the average density for each flightline and also generates two histograms, one for the z coordinate with a bin size of 5 meters and one for the intensity with a bin size of 64. The resulting textual descriptions are output into the specified quality folder with an appendix ‘_info’ added to the original file name. Perusing these reports tells you that there are up to 7 returns per pulse, that the average pulse density per flightline is between 7.1 and 7.9 shots per square meter, that the point source IDs of the points are already populated correctly, that there are isolated points far above and far below the scanned area, and that the intensity values range from 0 to 1023 with the majority being below 400. The warnings in the lasinfo and lasvalidate reports about the presence of return numbers 6 and 7 have to do with the history of the LAS format and can safely be ignored.

lasoverlap -i Ayutthaya\strips_raw\*.laz ^
           -files_are_flightlines ^
           -min_diff 0.1 -max_diff 0.3 ^
           -odir Ayutthaya\quality -o overlap.png

This results in two color illustrations. One image shows the flightline overlap, with blue indicating one flightline, turquoise indicating two, and yellow indicating three flightlines. Note that wet areas (rivers, lakes, rice paddies, …) without LiDAR returns affect this visualization. The other image shows how well overlapping flightlines align. Their vertical difference is color coded, with white meaning less than 10 cm of error while saturated red and blue indicate areas with more than 30 cm of positive or negative difference.


One-pixel-wide red and blue lines along building edges and speckles of red and blue in vegetated areas are normal. We need to look out for large systematic errors where terrain features or flightline outlines become visible. If you click through this photo album you will eventually see typical examples (make sure to read the comments too). One area slightly below the center looks suspicious. We load the PNG into the GUI to pick this area for closer inspection with lasview.

lasview -i Ayutthaya\strips_raw\*.laz -gui

Why these flightline differences exist and whether they are detrimental to your purpose are questions that you will have to explore further. For our purposes this isolated difference was noted but will not prevent us from proceeding. Next we want to investigate the pulse density. We do this with lasgrid. We know that the average last return density per flightline is between 7.1 and 7.9 shots per square meter. We set up the false color map for lasgrid such that it turns blue where the last return density drops to 4 shots (or fewer) per square meter and red where it reaches 8 shots (or more).

lasgrid -i Ayutthaya\strips_raw\*.laz -merged ^
        -keep_last ^
        -step 2 -density ^
        -false -set_min_max 4 8 ^
        -odir Ayutthaya\quality -o density_4ppm_8ppm.png

The last return density per square meter mapped to a false color: blue is 4 or fewer, red is 8 or more.

The last return density image clearly shows how the density increases to over 10 pulses per square meter in all areas of flightline overlap. However, as large parts are covered by only one flightline, it is their density that should be considered. We note great variations in density along the flightlines. Those have to do with aircraft movement and in particular with the pitch. When the nose of the plane goes up even slightly, the gigantic “fan of laser pulses” (that can be thought of as being rigidly attached at the bottom perpendicular to the aircraft flight direction) moves forward faster on the ground far below and therefore covers it with fewer shots per square meter. Conversely, when the nose of the plane goes down, the spacing between scan lines far below the plane is condensed so that the density increases. Inclement weather and the resulting airplane pitch turbulence can have a big impact on how regular the laser pulse spacing is on the ground. Read this article for more on LiDAR pulse density and spacing.

LiDAR Preparation

When you have airborne LiDAR in flightlines the first step is to tile the data into square tiles that are typically 1000 by 1000 or – for higher density surveys – 500 by 500 meters in size. This can be done with lastile. We also add a buffer of 30 meters to each tile. Why buffers are important for tile-based processing is explained here. We choose 30 meters as this is larger than any building we expect in this area and slightly larger than the ‘-step’ size we use later when classifying the points into ground and non-ground points with lasground.

lastile -i Ayutthaya\strips_raw\*.laz ^
        -tile_size 500 -buffer 30 -flag_as_withheld ^
        -odir Ayutthaya\tiles_raw -o ayu.laz

NOTE: Usually you will have to add ‘-files_are_flightlines’ or ‘-apply_file_source_ID’ to the lastile command shown above in order to preserve the information about which point is from which flightline. We do not have to do this here, as is evident from the lasinfo reports we generated earlier: not only is the file source ID in the LAS header correctly set to 2, 3, or 4, reflecting the intended flightline numbering evident from the file names, but the point source ID of each point is also already set to the correct value of 2, 3, or 4. For more info see this or this discussion from the LAStools user forum.
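
For survey data where these IDs are not yet populated, the usual tiling call would look roughly like this (a sketch only; it is not needed for our Ayutthaya strips):

rem sketch for the usual case; not needed for the Ayutthaya strips
lastile -i strips_raw\*.laz ^
        -files_are_flightlines ^
        -tile_size 500 -buffer 30 -flag_as_withheld ^
        -odir tiles_raw -o tile.laz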

Next we classify isolated points that are far from most other points with lasnoise into the (default) classification code 7. See the README file for the meaning of the parameters and play around with different settings to get a feel for how to make this process more or less aggressive.

lasnoise -i Ayutthaya\tiles_raw\ayu*.laz ^
         -step_xy 4 -step_z 2 -isolated 5 ^
         -odir Ayutthaya\tiles_denoised -olaz ^
         -cores 4

Especially for ground classification it is important that low noise points are excluded. You should inspect a number of tiles with lasview to make sure the low noise points are now all pink when you color the points by classification.

lasview -i Ayutthaya\tiles_denoised\ayu*.laz -gui

While the algorithms in lasground are designed to withstand a few noise points below the ground, you will find that it will include them into the ground model if there are too many of them. Hence, it is important to tell lasground to ignore these noise points. For the other parameters used see the README file of lasground.

lasground -i Ayutthaya\tiles_denoised\ayu*.laz ^
          -ignore_class 7 ^
          -city -ultra_fine ^
          -compute_height ^
          -odir Ayutthaya\tiles_ground -olaz ^
          -cores 4

You should visually check the resulting ground classification of each tile with lasview by selecting smaller subsets (press ‘x’, draw a rectangle, press ‘x’ again, use arrow keys to walk) and then switch back and forth between a triangulation of the ground points (press ‘g’ and then press ‘t’) and a triangulation of last returns (press ‘l’ and then press ‘t’). See the README of lasview for more information on those hotkeys.

lasview -i Ayutthaya\tiles_ground\ayu*.laz -gui

This way I found at least one tile that should be reclassified with ‘-metro’ instead of ‘-city’ because it still contained one large building in the ground classification; the sketch below shows how that one tile could be re-run. Alternatively you can correct misclassifications manually using lasview as shown in the next few screenshots.
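
A sketch of re-running lasground with ‘-metro’ on just that one tile could look like this (the tile name is a placeholder):

rem the tile name is a placeholder
lasground -i Ayutthaya\tiles_denoised\ayu_669500_1586500.laz ^
          -ignore_class 7 ^
          -metro -ultra_fine ^
          -compute_height ^
          -odir Ayutthaya\tiles_ground -olaz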


Manually editing tiles this way is an optional step for advanced users who have a license. In case you managed to do such a manual edit and saved it as a LAY file using LASlayers (see the README file of lasview), you can overwrite the old file with:

laslayers -i Ayutthaya\tiles_ground\ayu_669500_1586500.laz -ilay ^
          -o Ayutthaya\tiles_ground\ayu_669500_1586500_edited.laz

move Ayutthaya\tiles_ground\ayu_669500_1586500_edited.laz ^
     Ayutthaya\tiles_ground\ayu_669500_1586500.laz

The next step classifies those points that are neither ground (2) nor noise (7) into building (or rather roof) points (class 6) and high vegetation points (class 5). For this we use lasclassify with the default parameters, which only consider points that are at least 2 meters above the classified ground points (see the README for details on all available parameters).

lasclassify -i Ayutthaya\tiles_ground\ayu*.laz ^
            -ignore_class 7 ^
            -odir Ayutthaya\tiles_classified -olaz ^
            -cores 4

We check the classification of each tile with lasview by selecting smaller subsets (press ‘x’, draw a rectangle, press ‘x’ again) and by traversing through the point cloud with the arrow keys. You will find a number of misclassifications: boats classified as buildings, water towers or complex temple roofs classified as vegetation, and so on. You could use lasview to manually correct any classifications that are really important.

lasview -i Ayutthaya\tiles_classified\ayu*.laz -gui

Before delivering the classified LiDAR tiles to a customer or another user it is imperative to remove the buffers that were carried through all computations to avoid artifacts along the tile boundary. This can also be done with lastile.

lastile -i Ayutthaya\tiles_classified\ayu*.laz ^
        -remove_buffer ^
        -odir Ayutthaya\tiles_final -olaz ^
        -cores 4

Together with the tiling you may want to deliver a tile overview file in KML format (or in SHP format) that you can easily generate with lasboundary using this command line:

lasboundary -i Ayutthaya\tiles_final\ayu*.laz ^
            -use_bb ^
            -overview -labels ^
            -o Ayutthaya\tiles_overview.kml

The small KML file generated by lasboundary provides a quick overview of where the tiles are located, their names, their bounding boxes, and the number of points they contain.
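
Should a SHP overview be preferred, the same command can write a shapefile instead by simply changing the output extension (a sketch):

lasboundary -i Ayutthaya\tiles_final\ayu*.laz ^
            -use_bb ^
            -overview -labels ^
            -o Ayutthaya\tiles_overview.shp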

Derivative Production

The most common derivative product produced from LiDAR data is a Digital Terrain Model (DTM) in the form of an elevation raster. This can be obtained by interpolating the ground points with a triangulation (i.e. a Delaunay TIN) and by sampling the TIN at the center of each raster cell. The pulse density of well over 4 shots per square meter definitely supports a resolution of 0.5 meters for the raster DTM. From the ground-classified tiles with buffers we compute the DTM using las2dem as follows:

las2dem -i Ayutthaya\tiles_ground\ayu*.laz ^
        -keep_class 2 ^
        -step 0.5 -use_tile_bb ^
        -odir Ayutthaya\tiles_dtm -obil ^
        -cores 4

It’s important to add ‘-use_tile_bb’ to the command line, which limits the raster generation to the original tile size of 500 by 500 meters in order not to rasterize the buffers that extend the tiles by 30 meters in each direction. We used the BIL format so that we can inspect the resulting elevation rasters with lasview:

lasview -i Ayutthaya\tiles_dtm\ayu*.bil -gui

To create a hillshaded version of the DTM you can use your favorite raster processing package such as GDAL or GRASS but you could also use the BLAST extension of LAStools and create a large seamless image with blast2dem as follows:

blast2dem -i Ayutthaya\tiles_dtm\ayu*.bil -merged ^
          -step 0.5 -hillshade -epsg 32647 ^
          -o Ayutthaya\dtm_hillshade.png

Because blast2dem does not parse the PRJ files that accompany the BIL rasters we have to specify the EPSG code explicitly to also get a KML file that allows us to visualize the LiDAR in Google Earth.


A hillshading of the merged DTM rasters produced with blast2dem.
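
For those who prefer GDAL for the hillshading, a rough equivalent could look like this, assuming a text file ‘dtm_tiles.txt’ that lists the BIL rasters (file names are illustrative):

rem 'dtm_tiles.txt' lists the BIL tiles, one path per line
gdalbuildvrt -input_file_list dtm_tiles.txt Ayutthaya\dtm.vrt
gdaldem hillshade Ayutthaya\dtm.vrt Ayutthaya\dtm_hillshade_gdal.tif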

Next we generate a Digital Surface Model (DSM) that includes the highest objects the laser has hit. We use the spike-free algorithm implemented in las2dem, which creates a triangulation of the highest returns as follows:

las2dem -i Ayutthaya\tiles_denoised\ayu*.laz ^
        -drop_class 7 ^
        -step 0.5 -spike_free 1.2 -use_tile_bb ^
        -odir Ayutthaya\tiles_dsm -obil ^
        -cores 4

We used 1.2 as the freeze value for the spike-free algorithm because this is about three times the average last return spacing reported in the individual lasinfo reports generated during quality checking: at roughly 7.5 last returns per square meter the average spacing is around 1/sqrt(7.5) ≈ 0.37 meters, and three times that is about 1.1 meters. Again we inspect the resulting rasters with lasview:

lasview -i Ayutthaya\tiles_dsm\ayu*.bil -gui

For comparison we also generate the DSM rasters using a simple first-return interpolation, again with las2dem, as follows:

las2dem -i Ayutthaya\tiles_denoised\ayu*.laz ^
        -drop_class 7 -keep_first ^
        -step 0.5 -use_tile_bb ^
        -odir Ayutthaya\tiles_dsm -obil ^
        -cores 4

A few direct side-by-side comparisons between the spike-free DSM and the first-return DSM show the differences, which are especially noticeable along building edges and in large trees.


Another product that we can easily create is a layer of building footprints derived from the automatically classified roofs using lasboundary. Because the tool is quite scalable we can simply merge the final tiles on the fly. This also avoids including duplicate points from the tile buffers, whose classifications are often less accurate.

lasboundary -i Ayutthaya\tiles_final\ayu*.laz -merged ^
            -keep_class 6 ^
            -disjoint -concavity 1.1 ^
            -o Ayutthaya\buildings.shp

Similarly we can use lasboundary to create a vegetation layer from those points that were automatically classified as high vegetation.

lasboundary -i Ayutthaya\tiles_final\ayu*.laz -merged ^
            -keep_class 5 ^
            -disjoint -concavity 3 ^
            -o Ayutthaya\vegetation.shp

We can also produce 1.0 meter contour lines from the ground classified points. However, for nicer contours it is beneficial to first generate a subset of the ground points with lasthin using option ‘-contours 1.0’ as follows:

lasthin -i Ayutthaya\tiles_final\ayu*.laz ^
        -keep_class 2 ^
        -step 1.0 -contours 1.0 ^
        -odir Ayutthaya\tiles_temp -olaz ^
        -cores 4

We then merge all subsets of ground points from those temporary tiles on-the-fly into one (using the ‘-merged’ option) and let blast2iso from the BLAST extension of LAStools generate smoothed and simplified 1 meter contours as follows:

blast2iso -i Ayutthaya\tiles_temp\ayu*.laz -merged ^
          -iso_every 1.0 ^
          -smooth 2 -simplify_length 0.5 -simplify_area 0.5 -clean 5.0 ^
          -o Ayutthaya\contours_1m.shp

Finally we composite all of our derived LiDAR products into one map using QGIS and then fuse it with data from OpenStreetMap that we’ve downloaded from BBBike. Here you can download the OSM data that we use.


It is particularly interesting to compare the building footprints that were automatically derived with our LiDAR processing pipeline with those mapped by OpenStreetMap volunteers. We immediately see that there is a lot of volunteer work left to do, and the LiDAR-derived data can assist in directing those mapping efforts. A closer look reveals the (expected) quality difference between hand-mapped and auto-generated data.


The OSM buildings are simpler. These polygons are drawn and divided into logical units by a human; they are individually verified and correspond to actual buildings. However, they are less well aligned with the Google Earth satellite image. The LiDAR-derived building footprints are complex because lasboundary has no logic to simplify the complicated polygonal chains that enclose the points automatically classified as roof into rectilinear shapes, or to break directly adjacent roofs into separate logical units. However, most buildings are found (along with some objects that are not buildings) and their geospatial alignment is as good as that of the LiDAR data.

LASmoons: Maria Kampouri

Maria Kampouri (recipient of three LASmoons)
Remote Sensing Laboratory, School of Rural & Surveying Engineering
National Technical University of Athens, GREECE

Background:
The Aralar Natural Park, famous for its stunning landscapes, is located in the southeast of the province of Gipuzkoa, sharing a border with the neighboring province of Navarre. Inside the park there are nature reserves of exceptional importance, such as beech woods, a large number of yew trees, very singular species of flora and fauna, and areas of exceptional geological interest. Griffon vultures, Egyptian vultures, golden eagles and even bearded vultures (also known as lammergeiers) can be seen flying over this area. European minks and Pyrenean desmans can be found in the streams and rivers that descend from the mountain tops.

The concept of biodiversity is based on inter- and intra-species genetic variation and has been evolving over the past 25 years. The importance of mapping biodiversity in order to plan its conservation, as well as of identifying patterns in endemism and biodiversity hot-spots, has been a pillar of EU and global environmental policy and legislation. The coupling of remote sensing and field data can increase the reliability, periodicity and reproducibility of ecosystem process and biodiversity monitoring, leading to an increasing interest in environmental monitoring using data for the same areas over time. Natural processes and complexity are best explored by observing ecosystems or landscapes across scales, using spatial analysis tools such as LAStools.


DTM generated with the restricted version of las2dem above its point limits.

Goal:
The aim of this study is to investigate the potential of LiDAR data for identifying and delineating forest patches of particular interest with respect to ecosystem dynamics and biodiversity, and to produce a corresponding biodiversity map for Aralar Natural Park based on Simpson’s Diversity Index.

Data:
+ approximately 123 km^2 of LiDAR in 1km x 1km LAS tiles
+ Average point density: 2 pts/m^2
+ Spatial reference system: ETRS89 UTM zone 30N with elevations on the EGM08 geoid. These files were obtained from the LiDAR flight carried out in 2008 by the Provincial Council of Gipuzkoa and the LiDAR flights of the Basque Government.

LAStools processing:
1) data quality checking [lasinfo, lasoverlap, lasgrid, lasreturn]
2) classify ground and non-ground points [lasground]
3) remove low and high outliers [lasheight, lasnoise]
4) identify buildings within the study area [lasclassify]
5) create DTM tiles with 0.5 step in ‘.bil’ format [las2dem]
6) create DSM tiles with 0.5 step in ‘.bil’ format [las2dem]
7) create a normalized point cloud [lasheight]
8) create a highest-return canopy height model (CHM) [lasthin, las2dem]
9) create a pit-free CHM with the spike-free algorithm [las2dem] (see the sketch after this list)
10) create various rasters with forest metrics [lascanopy]
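
A minimal sketch of steps 8 and 9, assuming a height-normalized point cloud ‘normalized.laz’ (file names are illustrative; the spike-free freeze value should be roughly three times the average last return spacing of the actual data):

rem file names are illustrative
lasthin -i normalized.laz ^
        -step 0.5 -highest ^
        -o temp_highest.laz

las2dem -i temp_highest.laz ^
        -step 0.5 ^
        -o chm_highest.bil

rem a freeze value of 2.0 assumes ~2 pts/m^2 (last return spacing of roughly 0.7 m)
las2dem -i normalized.laz ^
        -step 0.5 -spike_free 2.0 ^
        -o chm_pit_free.bil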

The generated elevation and forest metrics rasters are then combined with satellite data to create a biodiversity map, using Simpson’s Diversity Index.
