This repo contains an easier-to-use implementation of KPConv based on PyTorch.
KPConv is a powerful point convolution for point cloud processing. However, the original PyTorch implementation of KPConv has the following drawbacks:
- It relies on heavy data preprocessing in the dataloader `collate_fn` to downsample the input point clouds, so one has to rewrite the `collate_fn` to work with KPConv. Moreover, the data processing runs on the CPU, which may be slow when the point clouds are large (e.g., KITTI).
- The network architecture and the KPConv configurations are fixed in the config file, and only a single-branch FCN architecture is supported. This is too inflexible to build multi-branch networks for more complicated tasks.
To use KPConv in more complicated networks, we build this repo with the following modifications:
- GPU-based grid subsampling and radius neighbor search. To accelerate kNN search, we use KeOps. This enables us to decouple grid subsampling from data loading (see the kNN sketch after this list).
- Rebuilt KPConv interface. This enables us to insert KPConv anywhere in the network. All KPConv modules are rewritten to accept four inputs (see the usage sketch after this list):
  - `s_feats`: features of the support points.
  - `q_points`: coordinates of the query points.
  - `s_points`: coordinates of the support points.
  - `neighbor_indices`: the indices of the neighbors of each query point.
- Optional normalization with a simple argument: `None`, `BatchNorm`, `InstanceNorm`, `GroupNorm` and `LayerNorm`.
- Optional activation with a simple argument: `None`, `ReLU`, `LeakyReLU`, `ELU`, `GELU`, `Sigmoid`, `Softplus`, `Tanh`, `Identity`.
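As a rough illustration of the GPU neighbor search, a kNN query with KeOps can be written as in the minimal sketch below. The helper name `keops_knn` and its signature are ours for illustration, not necessarily the API of this repo:

```python
import torch
from pykeops.torch import LazyTensor


def keops_knn(q_points: torch.Tensor, s_points: torch.Tensor, k: int) -> torch.Tensor:
    """Return the indices of the k nearest support points for every query point.

    q_points: (M, 3) query coordinates, s_points: (N, 3) support coordinates.
    """
    x_i = LazyTensor(q_points[:, None, :])  # (M, 1, 3) symbolic tensor
    y_j = LazyTensor(s_points[None, :, :])  # (1, N, 3) symbolic tensor
    dists = ((x_i - y_j) ** 2).sum(-1)      # (M, N) squared distances, never materialized
    return dists.argKmin(k, dim=1)          # (M, k) neighbor indices, computed on GPU if inputs are on GPU
```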
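To make the four-input interface and the normalization/activation arguments concrete, here is a minimal, hypothetical usage sketch. `ToyPointConvBlock` and its keyword names are made up for illustration and replace the actual kernel-point convolution with a plain neighbor mean; refer to the modules in this repo for the real API.

```python
import torch
import torch.nn as nn


class ToyPointConvBlock(nn.Module):
    """Toy stand-in for a KPConv block with the four-input interface above.

    NOT the repo's implementation: the kernel-point aggregation is replaced by a
    plain neighbor mean, so the point coordinates are accepted but unused here.
    """

    def __init__(self, in_channels, out_channels, norm="GroupNorm", act="LeakyReLU"):
        super().__init__()
        self.linear = nn.Linear(in_channels, out_channels)
        # The repo selects normalization/activation from a simple argument;
        # this toy only mimics that idea for two of the choices.
        self.norm = nn.GroupNorm(8, out_channels) if norm == "GroupNorm" else nn.Identity()
        self.act = nn.LeakyReLU(0.1) if act == "LeakyReLU" else nn.Identity()

    def forward(self, s_feats, q_points, s_points, neighbor_indices):
        # Gather the features of each query point's neighbors among the support
        # points and aggregate them (a real KPConv would weight them by
        # kernel-point correlations computed from q_points/s_points).
        neighbor_feats = s_feats[neighbor_indices]         # (M, K, C_in)
        q_feats = self.linear(neighbor_feats.mean(dim=1))  # (M, C_out)
        return self.act(self.norm(q_feats))


# Example call with the four inputs described above.
block = ToyPointConvBlock(64, 128)
s_feats = torch.rand(10000, 64)
s_points = torch.rand(10000, 3)
q_points = torch.rand(2500, 3)
neighbor_indices = torch.randint(0, 10000, (2500, 32))
q_feats = block(s_feats, q_points, s_points, neighbor_indices)  # (2500, 128)
```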
Use the following command for installation:

```bash
python setup.py develop
```
We provide an example of the S3DIS scene segmentation task in `examples/scene_segmentation`.
We use Vision3d-Engine for training and testing. Refer to Vision3d-Engine for installation.
- Download data from the S3DIS official site.
- Run `examples/scene_segmentation/preprocess_s3dis.py` for data preprocessing.
- Train: `python trainval.py --test_area=Area_5`
- Test: `python test.py --test_epoch=EPOCH_ID --test_area=Area_5`