
Metashape icon

Metashape-photogrammetry

Metashape step-by-step tutorial using the GUI and Python API for photogrammetry (point clouds, DEM, mesh, texture and orthomosaic) from aerial images.


Credits

The tutorial was prepared by Viet Nguyen (Earth Observation and Geoinformation Science Lab, University of Greifswald) based on

UAV photogrammetry image (source: https://www.inrae.fr/en/news/remote-sensing-dossier)


Table of Contents

Project structure

Getting started

  1. Add photos
  2. Estimate image quality
  3. Reflectance calibration
  4. Set primary channel
  5. Image projection
  6. Align photos
  7. Add ground control points
  8. Improve alignment
    - 8.1 Optimize Camera Alignment
    - 8.2 Filter uncertain points
    - 8.3 Filter by Projection accuracy
    - 8.4 Filter by Reprojection Error
  9. Dense point cloud
  10. Mesh model
  11. Orthomosaic
  12. DEM
  13. Texture

Documenting


Code and version

The tutorial not only guides you through the main steps of photogrammetry in the Metashape GUI, but also provides Python scripts for those steps to run in the Metashape Python console. The scripts were designed for Metashape version 1.8.4.


Project structure

It is recommended to use the standardised project structure (or something similar) throughout all future projects.

{project_directory} (The folder with all files related to this project)
|   overview_img.{ext}
|   description.txt
├───config (where you place your configuration files)
        {cfg_0001}.yml
        {cfg_0002}.yml
        ...
├───data (where you unzipped the files to)
├───────f0001 (The folder with images acquired on the first flight)
|           {img_0001}.{ext}
|           {img_0002}.{ext}
|           ...
├───────f0002 (The folder with images acquired on the second flight)
|           {img_0001}.{ext}
|           {img_0002}.{ext}
|           ...
|       ...
├───────f9999 (The folder with images acquired on the last flight)
|           {img_0001}.{ext}
|           {img_0002}.{ext}
|           ...
├───────gcps
|           (...)
├───────GNSS
|           (...)
├───export (where you place export models and files)
        ...
└───metashape (This is where you save your Agisoft Metashape projects to)
        {metashape_project_name}.psx
        .{metashape_project_name}.files
        {metashape_project_name}_processing_report.pdf
        (optionally: {metashape_project_name}.log)

A standardised project structure is important for automated processing and archiving.


Getting started

Tip

Below is step-by-step guidance for the Metashape GUI, along with Python scripts for those steps. For a fully automated workflow, use the GUI for step 1 to step 7 (add GCPs); the remaining steps can be run with the all-in-one workflow code here

1. Add photos

It is helpful to include the subfolder name in the photo file name in Metashape (to tell which flight each photo came from). Below is code for the Python console that renames all photos to reflect the subfolder they are in.

import Metashape 
from pathlib import Path

doc = Metashape.app.document # accesses the current project and document
chunk = doc.chunk # access the active chunk

for c in chunk.cameras: # loops over all cameras in the active chunk
    cp = Path(c.photo.path) # gets the path for each photo
    c.label = str(cp.parent.name) + '/' + cp.name # renames the camera label in the metashape project to include the parent directory of the photo

Images from MicaSense RedEdge, MicaSense Altum, Parrot Sequoia and DJI Phantom 4 Multispectral can be loaded at once for all bands. Open the Workflow menu and choose the Add Photos option. Select all images, including reflectance calibration images, and click the OK button. In the Add Photos dialog, choose the Multi-camera system option:

add photo

Metashape Pro can automatically sort those calibration images into a special camera folder in the Workspace pane if the image metadata indicates that they are calibration images. These images are disabled automatically so they are not used in actual processing.
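
As a rough sketch, this step could also be scripted in the Python console. The folder path below is a placeholder (not from the tutorial); MultiplaneLayout tells Metashape to group the individual band images into multi-camera systems.

import Metashape
from pathlib import Path

doc = Metashape.app.document
chunk = doc.addChunk()  # or doc.chunk to use the active chunk

# placeholder image folder; adjust to your own project structure
image_folder = Path("path/to/data/f0001")
images = [str(p) for p in image_folder.glob("*.tif")]

# MultiplaneLayout groups the band images into multi-camera systems
chunk.addPhotos(images, layout=Metashape.MultiplaneLayout)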

2. Estimate image quality

This is done by right-clicking any of the photos in a Chunk, selecting Estimate Image Quality…, and selecting all photos to be analysed, as shown in the figure below.

estimate image quality

Open the Photos pane by clicking Photos in the View menu. Then, make sure to view the details rather than icons to check the Quality for each image.

Tip

Filter on quality and Disable all selected cameras that do not meet the standard. Agisoft recommends a quality of at least 0.5.
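
A minimal sketch for the Python console, assuming the quality value is stored under the Image/Quality metadata key (the 0.5 threshold follows the recommendation above):

import Metashape

chunk = Metashape.app.document.chunk

# estimate image quality for all cameras in the chunk
chunk.analyzePhotos(chunk.cameras)

# disable cameras below the recommended quality threshold of 0.5
for camera in chunk.cameras:
    quality = float(camera.frames[0].meta["Image/Quality"])
    if quality < 0.5:
        camera.enabled = False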

3. Reflectance calibration

Open the Tools menu and choose the Calibrate Reflectance option. Press the Locate Panels button:

reflectance calibration

As a result, the images with the panel will be moved to a separate folder, and masks will be applied to cover everything in the images except the panel itself. If the panels are not located automatically, use the manual approach.
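
The same step in the Python console could look like the sketch below (whether the sun sensor is used depends on your camera):

import Metashape

chunk = Metashape.app.document.chunk

# detect the calibration panels and mask everything except the panel
chunk.locateReflectancePanels()

# calibrate reflectance using the panels (and the sun sensor, if available)
chunk.calibrateReflectance(use_reflectance_panels=True, use_sun_sensor=True)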

4. Set primary channel

For multispectral imagery the main processing steps (e.g., Align photos) are performed on the primary channel. Change the primary channel from the default Blue band to the NIR band, which is more detailed and sharper.

set primary channel
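
A short sketch for the Python console; the band index depends on your camera, so the value 4 below is only an example, not the tutorial's setting:

import Metashape

chunk = Metashape.app.document.chunk

# list the bands to find the index of the NIR band
for i, sensor in enumerate(chunk.sensors):
    print(i, sensor.label)

# set the primary channel to the NIR band (index 4 is only an example)
chunk.primary_channel = 4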

5. Image projection

Go to Convert in the Reference panel and select the desired CRS for the project.

image projection
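
A hedged sketch of the same conversion in the Python console; the EPSG code (UTM zone 33N) is a placeholder, replace it with your project's CRS:

import Metashape

chunk = Metashape.app.document.chunk

# placeholder target CRS; replace with the EPSG code of your project
target_crs = Metashape.CoordinateSystem("EPSG::32633")

# convert the camera reference coordinates from the current CRS to the target CRS
for camera in chunk.cameras:
    if camera.reference.location:
        camera.reference.location = Metashape.CoordinateSystem.transform(
            camera.reference.location, chunk.crs, target_crs)

chunk.crs = target_crs
chunk.updateTransform()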

6. Align photos

Below are the recommended settings for photo alignment. The code to use in the Python console can be found here.

align photo
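
For reference, a minimal alignment call in the Python console might look like this; the parameters are typical examples (downscale=1 corresponds to High accuracy), not necessarily the exact settings shown in the screenshot or the linked script:

import Metashape

chunk = Metashape.app.document.chunk

# match photos at high accuracy with generic and reference preselection
chunk.matchPhotos(downscale=1,
                  generic_preselection=True,
                  reference_preselection=True)

# align the cameras to build the sparse point cloud
chunk.alignCameras()

Metashape.app.document.save()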

7. Add ground control points

Go to Import Reference in the Reference panel and load the CSV file.

Follow this tutorial to set the GCPs.
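
Importing the GCP coordinates can also be scripted, as in the sketch below. The file path and column layout are placeholders, not from the tutorial; adjust columns and delimiter to your CSV.

import Metashape

chunk = Metashape.app.document.chunk

# placeholder path and column layout: n = label, x/y/z = coordinates
chunk.importReference("path/to/data/gcps/gcps.csv",
                      format=Metashape.ReferenceFormatCSV,
                      columns="nxyz",
                      delimiter=",",
                      create_markers=True)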

8. Improve alignment

The following optimizations improve the quality of the sparse point cloud: Optimize Camera Alignment, Filter uncertain points, Filter by Projection accuracy, and Filter by Reprojection Error. These optimizations can be automated in the Python console using this code.

Note

Save the project and back up your data before any destructive actions.

8.1 Optimize Camera Alignment

This is done by selecting Optimize Cameras from the Tools menu.

optimize camera

Change the model view to show the Point Cloud Variance. Lower values (blue) are generally better and more constrained.
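
In the Python console, camera optimization is a single call; the fitted parameters below are a common default set, not necessarily the tutorial's exact choice:

import Metashape

chunk = Metashape.app.document.chunk

# optimize camera alignment with a common set of fitted parameters
chunk.optimizeCameras(fit_f=True, fit_cx=True, fit_cy=True,
                      fit_k1=True, fit_k2=True, fit_k3=True,
                      fit_p1=True, fit_p2=True)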

8.2 Filter uncertain points

filter uncertain points

A good value to use for the uncertainty level is 10, though make sure you do not remove all points by doing so! A rule of thumb is to select no more than half to two-thirds of all points, and then delete these by pressing the Delete key on the keyboard.
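
The gradual selection can be scripted with the filter class of the Metashape 1.8 API (in Metashape 2.x the tie point cloud is chunk.tie_points and the class is Metashape.TiePoints.Filter). The threshold of 10 follows the suggestion above; check how many points would be removed before applying it blindly.

import Metashape

chunk = Metashape.app.document.chunk

# gradual selection on reconstruction uncertainty (Metashape 1.8 API)
f = Metashape.PointCloud.Filter()
f.init(chunk, criterion=Metashape.PointCloud.Filter.ReconstructionUncertainty)
f.removePoints(10)  # threshold of 10, as suggested above

# re-optimize the camera alignment after removing points
chunk.optimizeCameras()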

Tip

After filtering points, it is important to optimize the alignment once more. Do so by revisiting Optimize Camera Alignment.

8.3 Filter by Projection accuracy

This time, select the points based on their Projection accuracy, aiming for a final Projection accuracy of 3.

filter by projection accuracy
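
The same sketch as in step 8.2 applies here, changing only the criterion and threshold:

import Metashape

chunk = Metashape.app.document.chunk

# gradual selection on projection accuracy (Metashape 1.8 API)
f = Metashape.PointCloud.Filter()
f.init(chunk, criterion=Metashape.PointCloud.Filter.ProjectionAccuracy)
f.removePoints(3)  # aiming for a projection accuracy of 3

chunk.optimizeCameras()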

Tip

After filtering points, it is important to optimize the alignment once more. Do so by revisiting Optimize Camera Alignment.

8.4 Filter by Reprojection Error

A good value to use here is 0.3, though make sure you do not remove all points by doing so! As a rule of thumb, this final selection of points should leave you with approx. 10% of the points you started off with.

filter by reprojection error
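
And again the same pattern, now with the reprojection error criterion and the 0.3 threshold suggested above:

import Metashape

chunk = Metashape.app.document.chunk

# gradual selection on reprojection error (Metashape 1.8 API)
f = Metashape.PointCloud.Filter()
f.init(chunk, criterion=Metashape.PointCloud.Filter.ReprojectionError)
f.removePoints(0.3)  # threshold of 0.3, as suggested above

chunk.optimizeCameras()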

Tip

After filtering points, it is important to optimize the alignment once more. Do so by revisiting Optimize Camera Alignment.

9. Dense point cloud

Select Build Point Cloud from the Workflow menu. Below are the recommended settings; the code to use with the Python API can be found here.

dense point cloud

Visualise the point confidence by clicking the gray triangle next to the nine-dotted icon and selecting Point Cloud confidence. The color coding is red = bad, blue = good.
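
A hedged sketch of this step in the Python console; downscale=2 corresponds to High quality and MildFiltering keeps more detail, but these are example settings rather than the tutorial's exact ones (in Metashape 2.x the method is chunk.buildPointCloud()):

import Metashape

chunk = Metashape.app.document.chunk

# build depth maps first (downscale=2 = High quality, mild depth filtering)
chunk.buildDepthMaps(downscale=2, filter_mode=Metashape.MildFiltering)

# build the dense point cloud and store per-point confidence (Metashape 1.8 API)
chunk.buildDenseCloud(point_confidence=True)

Metashape.app.document.save()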

9.1. Filter by point confidence

Open Tools/Point Cloud in the menu and click on Filter by confidence… The dialog that pops up allows you to set minimum and maximum confidence values. For example, try setting Min: 50 and Max: 255. After looking at the difference, reset the filter by clicking on Reset filter within the Tools/Point Cloud menu.
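
The same filter can be set from the Python console, assuming the dense cloud attribute of the 1.8 API (chunk.point_cloud in Metashape 2.x):

import Metashape

chunk = Metashape.app.document.chunk
dense = chunk.dense_cloud  # chunk.point_cloud in Metashape 2.x

# keep only points with confidence between 50 and 255, as in the example above
dense.setConfidenceFilter(50, 255)

# reset the filter afterwards to show all points again
dense.resetFilters()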

10. Mesh model

mesh example

Select Build Mesh from the Workflow menu; you will be able to choose either Dense cloud or Depth maps as the source. The code for Build Mesh to use with the Python API can be found here.

Tip

Depth maps may lead to better results when dealing with a large number of minor details; otherwise, the Dense cloud should be used as the source. If you decide to use depth maps as the source data, make sure to enable Reuse depth maps to save computation time!

Mesh
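
A minimal sketch of the mesh step in the Python console, using the dense cloud as the source as suggested in the tip above (swap source_data for Metashape.DepthMapsData to build from depth maps); the face count and interpolation settings are typical examples:

import Metashape

chunk = Metashape.app.document.chunk

# build the mesh from the dense point cloud
chunk.buildModel(surface_type=Metashape.Arbitrary,
                 source_data=Metashape.DenseCloudData,
                 face_count=Metashape.HighFaceCount,
                 interpolation=Metashape.EnabledInterpolation)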

10.1. Filter the mesh

Sometimes your mesh has some tiny parts that are not connected to the main model. These can be removed by the Connected component filter.

filter mesh

10.2. Decimate mesh

Select Tools > Mesh > Decimate Mesh. Enter an appropriate target face count, for example, to halve the number of faces in the original mesh.

10.3. Smooth mesh

Select Tools > Mesh > Smooth Mesh. The strength of smoothing depends on the complexity of the canopy. Three values are recommended for low, medium, and high smoothing: 50, 100 and 200, respectively.
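
Steps 10.1 to 10.3 can also be scripted, roughly as below. The component-size threshold of 100 faces is an assumed example, and the availability of Model.removeComponents should be checked against your Metashape version; the decimation target halves the current face count, and smoothing strength 100 corresponds to the medium setting above.

import Metashape

chunk = Metashape.app.document.chunk
model = chunk.model

# 10.1 remove small disconnected components (threshold in faces; example value)
model.removeComponents(100)

# 10.2 decimate the mesh to half of the current number of faces
chunk.decimateModel(face_count=len(model.faces) // 2)

# 10.3 smooth the mesh (50 / 100 / 200 = low / medium / high smoothing)
chunk.smoothModel(100)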

11. Orthomosaic

Select Build Orthomosaic from the Workflow menu. To begin, you have to select the Projection parameter.

  • Geographic projection is often used for aerial photogrammetric surveys.

  • Planar projection is helpful when working with models that have vertical surfaces, such as vertical digital outcrop models.

  • Cylindrical projection can help reduce distortions when projecting cylindrical objects like tubes, rounded towers, or tunnels.

It is recommended to use Mesh as the surface. For complete coverage, enable the hole filling option under Blending mode to fill in any empty areas of the mosaic.

orthomosaic

The code for Build Orthomosaic to use with the Python API can be found here.
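
For orientation, a minimal call following the recommendations above (mesh surface, hole filling enabled) might look like this:

import Metashape

chunk = Metashape.app.document.chunk

# build the orthomosaic on the mesh surface with hole filling enabled
chunk.buildOrthomosaic(surface_data=Metashape.ModelData,
                       blending_mode=Metashape.MosaicBlending,
                       fill_holes=True)

Metashape.app.document.save()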

12. DEM

Select Build DEM from the Workflow menu. The code for Build DEM to use with the Python API can be found here.

build DEM

It is recommended to use Point Cloud as the source data since it provides more accurate results and faster processing.

It is recommended to keep the interpolation parameter Disabled for accurate reconstruction results, since only areas corresponding to point cloud or polygonal points are reconstructed. This method is usually recommended for the Mesh and Tiled Model data sources.
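
A sketch of the DEM step following the two recommendations above (dense point cloud as source, interpolation disabled); in Metashape 2.x the data source is Metashape.PointCloudData instead of DenseCloudData:

import Metashape

chunk = Metashape.app.document.chunk

# build the DEM from the dense point cloud with interpolation disabled
chunk.buildDem(source_data=Metashape.DenseCloudData,
               interpolation=Metashape.DisabledInterpolation)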

13. Texture

Open Build Texture from the Workflow menu.

build texture

Texture size/count determines the quality of the texture. Anything over 16384 can lead to very large file sizes on your hard disk. On the other hand, anything less than 4096 is probably insufficient. For greatest compatibility, keep the Texture size at 4096, but increase the count to, e.g., 5 or 10.
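
In the Python console, texturing is a UV-mapping step followed by the texture build. The sketch below uses a texture size of 4096 with five pages, following the compatibility advice above:

import Metashape

chunk = Metashape.app.document.chunk

# generate UV mapping with several texture pages, then build the texture
chunk.buildUV(mapping_mode=Metashape.GenericMapping, page_count=5)
chunk.buildTexture(blending_mode=Metashape.MosaicBlending, texture_size=4096)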


Documenting

Open File/Export and select Generate Report… Store the report in the metashape folder with the project file.
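
The report can also be exported from the Python console; the output path below is a placeholder matching the project structure above:

import Metashape

chunk = Metashape.app.document.chunk

# export the processing report next to the project file (placeholder path)
chunk.exportReport("path/to/metashape/project_processing_report.pdf",
                   title="Processing report")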