
New features in Meshroom 2025.1

natowi edited this page Aug 19, 2025 · 32 revisions

(Note: This is a pre-release article. More details and images will be added over the next few days.)

This is a short overview of some amazing new features of the Meshroom 2025.1 Release.
It is a huge release, with 330 implemented changes in Meshroom and 253 in AliceVision.
You will be able to read the full release notes (here for AliceVision) and (here for Meshroom).

There are many User Interface updates throughout Meshroom.
Please bear with us, as some features are not yet fully refined.

You can filter for known issues using this link
Feel free to create a Feature Request (New Issue).

UI updates and changes

When you start the new release, you will notice the new project page:


From here you can start Meshroom with your preferred pipeline template or open existing projects.

You can choose from 24 different pipeline templates, start from scratch, or upgrade an existing project.

The main Meshroom application will look familiar to you, but there are noticeable changes.


The menu has more entries, allowing you to manage projects and templates, clear images, or start and stop the graph computation. You can still start/stop the graph computation with a single button, but it is now less prominent, located in the upper middle of the screen.

The Image Viewer now has a sequence player, which is useful for previewing image sequences.

The 3D Viewer now has a light controller, which can be used to change the lighting in the 3D scene.

You will also notice a new tab next to the Graph Editor and Task Manager called "Script Editor".
It is part of the new Plugin System.

The new Plugin System [experimental]

One reason this release took so long is the internal restructuring of Meshroom into a more versatile node-based visual programming toolbox that makes it easy to implement custom nodes.
This required some untangling of Meshroom and AliceVision.
Meshroom is still bundled with AliceVision, but it is now easier than ever to add your own nodes or support for custom code, AI/ML models, and external software.

Note: Currently you will need a Developer Version of Meshroom to test the new Plugins on Meshroom Hub.

We're excited to introduce new experimental Machine Learning plugins available on github.com/meshroomHub. These plugins showcase the future of Meshroom workflows, though they currently require some development expertise (cloning repositories, installing dependencies, and configuring Python environments) and cannot be installed through the user interface yet.

  • mrGSplat: Gaussian Splat optimization and rendering
  • mrDepthEstimation: Monocular depth inference
  • mrDenseMotion: Optical flow estimation
  • mrRoma: Dense deep feature matching
  • mrIntrinsicImageDecomposition: Albedo, normals, and material extraction
  • mrDeblurring: Video deblurring
  • mrGeolocation: GPS extraction and geographic models download

Due to the many dependencies and the overall file size, it is not possible to bundle all features that are available as plugins in the main Meshroom release.
With the plugin system, you will be able to install only the functionality you need.

The new AI-generated documentation at https://deepwiki.com/alicevision/Meshroom will be a huge help for creating your own nodes for Meshroom.
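To illustrate the node-based idea, here is a minimal, self-contained sketch of the pattern a processing node follows: declared inputs and outputs plus a compute step. This is illustrative only and does not use the actual Meshroom plugin API; the class and attribute names below are invented for the example, so refer to the documentation above for the real node description format.

```python
# Illustrative sketch only: mimics the node-based pattern
# (declared inputs/outputs + a compute step). This is NOT the
# real Meshroom plugin API; all names here are hypothetical.

class Node:
    """Minimal stand-in for a node with named inputs and outputs."""
    inputs: dict
    outputs: dict

    def process(self, **kwargs):
        raise NotImplementedError


class ScaleValues(Node):
    """Hypothetical node: multiplies every input value by a factor."""
    inputs = {"values": list, "factor": float}
    outputs = {"scaled": list}

    def process(self, values, factor):
        return {"scaled": [v * factor for v in values]}


node = ScaleValues()
result = node.process(values=[1, 2, 3], factor=2.0)
print(result["scaled"])  # [2.0, 4.0, 6.0]
```

In a real node graph, the declared inputs and outputs are what the editor uses to wire nodes together; the compute step only runs when its upstream attributes change.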

Photogrammetry

First tests showed that the computation time of the default pipeline has not changed much; it is comparable to the 2023 release.
However, the quality of the reconstruction has improved, especially if you use the new segmentation node to generate masks. In one case, DepthMap computation time was reduced from 15 minutes to 5 minutes by using the new masking/segmentation feature. The speed-up is possible because areas that are not needed anyway are simply not computed. Masking also drastically improved the completeness and quality of the final model. View on Sketchfab: 2023.2, 2025.1 Draft Meshing, Default Photogrammetry Pipeline with depth reconstruction

Object Reconstruction & Segmentation


View on Sketchfab

It is now possible to perform targeted reconstruction with automatic object segmentation.
The ImageDetectionPrompt and ImageSegmentationBox nodes generate bounding boxes corresponding to the input text prompt.
The Segment Anything model is then used to generate a binary mask. This vastly improves the results of turntable reconstructions or object reconstructions that require merging two sides.
The segmentation model is bundled with Meshroom and allows for convenient mask generation; you can choose between CPU and GPU. Once the mask has been fed into the PrepareDenseScene node and the node has been executed, you can double-click on the node to preview the images with the masks applied.
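Conceptually, applying a binary mask before depth computation zeroes out the pixels that should be ignored. A generic NumPy sketch (toy data, not Meshroom's internal code):

```python
import numpy as np

# Toy 4x4 grayscale "image" and a binary mask (1 = keep, 0 = discard).
image = np.arange(16, dtype=np.float32).reshape(4, 4)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1  # keep only the central 2x2 region

# Masked-out pixels become zero, so later stages can skip them.
masked = image * mask
print(int(masked.sum()))  # 30 (sum of the four central pixels: 5+6+9+10)
```

Skipping masked-out regions is exactly where the DepthMap speed-up comes from: fewer pixels to estimate depth for.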

There are three modes:

  • Object Reconstruction:
    New ImageDetectionPrompt and ImageSegmentationBox nodes are added to the default pipeline, parallel to the SfM-related nodes. Because segmentation runs in parallel, the surroundings can still be used if needed. Image segmentation can then be used to focus on the object of interest.
  • Object Reconstruction Turn Table:
    ImageDetectionPrompt and ImageSegmentationBox are inserted into the default photogrammetry pipeline prior to FeatureExtraction. This removes the static background that would otherwise cause trouble with the reconstruction.
    Turn table image and segmented image.
  • Object Reconstruction Two Sides:
    A more complex pipeline with two Object Reconstruction sub-pipelines, one for each side, and a step to merge both.
    To add images from side A, add them to Image Viewer group 1.
    To add images from side B, add them to Image Viewer group 2.
    To reconstruct and merge two sides, the template runs two reconstruction pipelines with background removal, using masks generated by image segmentation. The masked-out parts are matched to each other and the two individually reconstructed sides are merged. After SfM, the pipeline finishes with the default depth reconstruction.

RTI and Multiview Photometric Stereo

The capabilities of Meshroom for Reflectance Transformation Imaging (RTI) have been improved by adding an interactive visualization of albedo and normal maps with real-time lighting control to the Viewer. Unique to Meshroom is the new MultiView Photometric Stereo pipeline, which allows reconstructing advanced surface detail using multiple light sources for each viewpoint.
For best results, a ScanRig or a LightDome (TBA) is recommended to fully utilise this capability. Manually moving the light source is also possible.
There is currently no other free, open-source software solution with ready-to-use Multiview Photometric Stereo capability.


Photometric Stereo (RTI)

Please check out the tutorials on how to capture images:
Introduction to Reflectance Transformation Imaging (RTI) (YouTube)
Reflectance Transformation Imaging: Dataset Capturing with Highlight RTI (YouTube)

Before starting, add the prefix ps_ to the folder with your images; otherwise the Photometric Stereo node will not be able to process the images and will throw the warning: "[warning] No images shared the same pose ID. Input images need to be located in a folder that starts with 'ps_' to be grouped together using their pose ID. The photometric stereo cannot run otherwise."
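The prefix check is easy to script before importing. A small sketch (the ps_ requirement comes from the warning above; the folder names are placeholders):

```python
from pathlib import Path

def ensure_ps_prefix(folder: str) -> str:
    """Return a folder name that satisfies the ps_ prefix requirement.

    Rename your dataset folder to this name before loading it into
    Meshroom, so the Photometric Stereo node can group images by pose ID.
    """
    p = Path(folder)
    if p.name.startswith("ps_"):
        return p.name
    return "ps_" + p.name

print(ensure_ps_prefix("my_dataset"))  # ps_my_dataset
print(ensure_ps_prefix("ps_dataset"))  # ps_dataset (already valid)
```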

Add the images to the Photometric Stereo graph.

You can try "Automatic Sphere Detection"; however, if you get the warning "[warning] Mismatch between the number of images and the number of light intensities (% images, % light intensities). This might happen when no sphere was detected for an image in the list, and thus light was not calibrated for it.", you need to set the sphere manually.


Once computation is complete, multiple results can be shown in the Image Viewer: Image Gallery, Albedo Maps, Normal Maps (World), Normal Maps (Camera, false colors), and Normal Maps (Camera).

Note: While "Display Phong Lighting: Photometric Stereo" is active, the other view modes cannot be used.

Now we can change the lighting.

MultiView Photometric Stereo

We now have a graph template to run Photometric Stereo on multiple views.

The folder also has to start with ps_. Additionally, the image set captured without changing light conditions must have "ambient" in the file names; those files will be used for SfM. The SfM Filter node separates the images and passes them on to the SfM or Photometric Stereo nodes.
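Conceptually, the split performed by the SfM Filter node looks like this (a plain-Python sketch with made-up file names, not the node's actual implementation):

```python
def split_ambient(filenames):
    """Split image names into SfM inputs ('ambient' images) and
    photometric-stereo inputs (everything else)."""
    sfm = [f for f in filenames if "ambient" in f]
    ps = [f for f in filenames if "ambient" not in f]
    return sfm, ps

# Hypothetical file names following the ps_ folder / "ambient" convention.
files = ["ps_obj/ambient_01.jpg", "ps_obj/light_A_01.jpg", "ps_obj/light_B_01.jpg"]
sfm, ps = split_ambient(files)
print(sfm)  # ['ps_obj/ambient_01.jpg']
print(ps)   # ['ps_obj/light_A_01.jpg', 'ps_obj/light_B_01.jpg']
```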

The final model can then be relit:


Point cloud and LIDAR

There is first experimental support for point clouds (E57, pc.ply) and meshing capabilities, allowing you to add LiDAR scans to the mix.

Blender preview

The ScenePreview node uses Blender to visualize a 3D model from a given set of cameras. The cameras must be provided as an SfMData file in JSON format. For the 3D model, both point clouds in Alembic format and meshes in OBJ format are supported. One frame per viewpoint will be rendered, and the undistorted views can optionally be used as background.

Survey point injection

Inject survey points (3D world coordinates + 2D coordinates in a given frame) using a JSON file generated from 3DEqualizer. The survey points are added to the SfMData and used to constrain the pose estimation in the bundle adjustment.
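As background, a survey point ties a known 3D world coordinate to its observed 2D position in a frame, and the bundle adjustment penalizes the reprojection error between the two. A generic pinhole-camera sketch with toy numbers (illustrative only, not AliceVision's implementation):

```python
import numpy as np

def reproject(point_3d, K):
    """Project a 3D point (in camera coordinates) to pixels with intrinsics K."""
    x = K @ point_3d
    return x[:2] / x[2]

# Toy intrinsics: focal length 1000 px, principal point (500, 500).
K = np.array([[1000.0,    0.0, 500.0],
              [   0.0, 1000.0, 500.0],
              [   0.0,    0.0,   1.0]])

survey_3d = np.array([0.1, -0.2, 2.0])  # known 3D point (camera frame here)
survey_2d = np.array([550.0, 400.0])    # measured pixel position in the frame

# The bundle adjustment drives this residual toward zero for each constraint.
residual = reproject(survey_3d, K) - survey_2d
print(residual)  # [0. 0.] — zero error in this toy setup
```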

Graph Editor - Selecting Nodes

Shift+dragging the mouse opens a selection area. Ctrl+Click selects or deselects nodes.

Node attributes can now be collapsed.

A search for attributes is also available.


...more to come

Note: The "Live Reconstruction" feature has been removed from the 2025.1 release.
