For the task of semantic segmentation, we measure the performance of the different methods using the mean intersection-over-union (mIoU) over all classes. The table below lists the available models and datasets for the segmentation task and the respective scores. Each score links to the respective weight file.
| Model / Dataset | SemanticKITTI | Toronto 3D | S3DIS | Semantic3D | Paris-Lille3D |
|---|---|---|---|---|---|
| RandLA-Net (tf) | 53.7 | 69.0 | 67.0 | 76.0 | 70.0 |
| RandLA-Net (torch) | 52.8 | 71.2 | 67.0 | 76.0 | 70.0 |
| KPConv (tf) | 58.7 | 65.6 | 65.0 | - | 76.7 |
| KPConv (torch) | 58.0 | 65.6 | 60.0 | - | 76.7 |
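As an illustration of how such a weight file can be used, the following is a minimal sketch assuming the PyTorch branch of the Open3D-ML API (`open3d.ml.torch`); the dataset and checkpoint paths are placeholders, not files shipped with the repository.

```python
# Minimal sketch: restore a pretrained RandLA-Net checkpoint and run it on
# SemanticKITTI. Assumes the PyTorch branch of Open3D-ML is installed;
# dataset_path and ckpt_path are placeholders.
import open3d.ml.torch as ml3d

dataset = ml3d.datasets.SemanticKITTI(dataset_path="/path/to/SemanticKITTI")
model = ml3d.models.RandLANet()
pipeline = ml3d.pipelines.SemanticSegmentation(model=model, dataset=dataset)

# Load the downloaded weight file and run inference over the test split.
pipeline.load_ckpt(ckpt_path="/path/to/randlanet_semantickitti.pth")
pipeline.run_test()
```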
The following models are implemented in this model zoo:
- KPConv (github): KPConv: Flexible and Deformable Convolution for Point Clouds.
- RandLA-Net (github): RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds.
The following is a list of datasets for which we provide dataset reader classes; a short usage sketch follows the list:
- SemanticKITTI (project page)
- Toronto 3D (github)
- Semantic3D (project page)
- S3DIS (project page)
- Paris-Lille 3D (project page)
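A minimal usage sketch for the dataset reader classes, again assuming the Open3D-ML Python API and a placeholder dataset path:

```python
# Minimal sketch: read one point cloud through a dataset reader class.
# Assumes Open3D-ML is installed; dataset_path is a placeholder.
import open3d.ml.torch as ml3d

dataset = ml3d.datasets.Toronto3D(dataset_path="/path/to/Toronto3D")

# Query the training split and fetch the first point cloud; the returned dict
# contains the points and per-point labels used for training and evaluation.
train_split = dataset.get_split("training")
data = train_split.get_data(0)
print(data["point"].shape, data["label"].shape)
```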
To download these datasets, visit the respective webpages and have a look at the scripts in `scripts/download_datasets`.