
MesonGS: Post-training 3D Gaussian Compression [ECCV 2024]

Shuzhao Xie, Weixiang Zhang, Chen Tang, Yunpeng Bai, Rongwei Lu, Shijia Ge, Zhi Wang

Webpage | Paper

This repository contains the official authors implementation associated with the paper "MesonGS: Post-training Compression of 3D Gaussians via Efficient Attribute Transformation".

Abstract: 3D Gaussian Splatting demonstrates excellent quality and speed in novel view synthesis. Nevertheless, the huge file size of the 3D Gaussians presents challenges for transmission and storage. Current works design compact models to replace the substantial volume and attributes of 3D Gaussians, along with intensive training to distill information. These endeavors demand considerable training time, presenting formidable hurdles for practical deployment. To this end, we propose MesonGS, a codec for post-training compression of 3D Gaussians. Initially, we introduce a measurement criterion that considers both view-dependent and view-independent factors to assess the impact of each Gaussian point on the rendering output, enabling the removal of insignificant points. Subsequently, we decrease the entropy of attributes through two transformations that complement subsequent entropy coding techniques to enhance the file compression rate. More specifically, we first replace rotation quaternions with Euler angles; then, we apply region adaptive hierarchical transform to key attributes to reduce entropy. Lastly, we adopt finer-grained quantization to avoid excessive information loss. Moreover, a well-crafted finetune scheme is devised to restore quality. Extensive experiments demonstrate that MesonGS significantly reduces the size of 3D Gaussians while preserving competitive quality.
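The quaternion-to-Euler replacement described in the abstract can be illustrated with a minimal, standalone numpy sketch (standard ZYX / yaw-pitch-roll convention; this illustrates the idea, not the repository's actual code path): a unit rotation quaternion has four correlated components, while three Euler angles carry the same rotation in one fewer channel, which helps the later quantization and entropy-coding stages.

```python
import numpy as np

def quat_to_euler(q):
    """Unit quaternion (w, x, y, z) -> (roll, pitch, yaw) in radians (ZYX convention)."""
    w, x, y, z = q
    roll = np.arctan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    pitch = np.arcsin(np.clip(2.0 * (w * y - z * x), -1.0, 1.0))
    yaw = np.arctan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return np.array([roll, pitch, yaw])

def euler_to_quat(e):
    """(roll, pitch, yaw) in radians -> unit quaternion (w, x, y, z)."""
    cr, sr = np.cos(e[0] / 2), np.sin(e[0] / 2)
    cp, sp = np.cos(e[1] / 2), np.sin(e[1] / 2)
    cy, sy = np.cos(e[2] / 2), np.sin(e[2] / 2)
    return np.array([
        cr * cp * cy + sr * sp * sy,   # w
        sr * cp * cy - cr * sp * sy,   # x
        cr * sp * cy + sr * cp * sy,   # y
        cr * cp * sy - sr * sp * cy,   # z
    ])

# Round trip: the 3 stored angles recover the original 4-component rotation.
q = np.array([0.9, 0.1, 0.2, 0.3])
q /= np.linalg.norm(q)
assert np.allclose(q, euler_to_quat(quat_to_euler(q)), atol=1e-6)
```

Away from the gimbal-lock singularity (pitch = ±π/2), the conversion is lossless in exact arithmetic, so the only information loss comes from the quantization step that follows.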

1. Cloning the Repository

git clone https://github.com/ShuzhaoXie/MesonGS.git

2. Install

2.1 Hardware and Software Requirements

  • CUDA-ready GPU with Compute Capability 7.0+
  • Ubuntu >= 18.04
  • Conda (recommended for easy setup)
  • C++ Compiler for PyTorch extensions (we recommend Visual Studio 2019 for Windows)
  • CUDA >= 11.6
  • C++ Compiler and CUDA SDK must be compatible

2.2 Setup

The provided install method is based on Conda package and environment management:

CUDA 11.6/11.8, GPU 3090/4090/V100:

sudo apt install zip unzip
conda env create --file environment.yml
conda activate mesongs
pip install plyfile tqdm einops scipy open3d trimesh Ninja seaborn loguru pandas torch_scatter

CUDA 12.1/12.4, GPU 3090/4090:

sudo apt install zip unzip
conda create -n mesongs python=3.10
conda activate mesongs
pip install torch==2.1.0+cu121 torchvision==0.16.0+cu121 torchaudio==2.1.0+cu121 --index-url https://download.pytorch.org/whl/cu121
pip install plyfile tqdm einops scipy open3d trimesh Ninja seaborn loguru pandas torch_scatter
pip install submodules/diff-gaussian-rasterization
pip install submodules/simple-knn
pip install submodules/weighted_distance

2.3 Environment Variables

  • Replace MAIN_DIR in utils/system_utils.py with your own directory path.
  • Prepare the directories:
    # cd to your mesongs directory first
    mkdir -p output data exp_data/csv
    

2.4 Preparing dataset and pre-trained 3D Gaussians

You can download a sample checkpoint of the mic scene from here [68 MB]. Then:

  1. Unzip and put the checkpoint directory into the output directory.

    .
    ├── cameras.json
    ├── cfg_args
    ├── input.ply
    └── point_cloud
        └── iteration_30000
            └── point_cloud.ply
  2. Put the datasets into the data directory. Our implementation only supports the datasets listed below.

    .
    ├── 360_v2
    │   ├── bicycle
    │   ├── bonsai
    │   ├── counter
    │   ├── flowers.txt
    │   ├── garden
    │   ├── kitchen
    │   ├── room
    │   ├── stump
    │   └── treehill.txt
    ├── db
    │   ├── drjohnson
    │   └── playroom
    ├── nerf_synthetic
    │   ├── chair
    │   ├── drums
    │   ├── ficus
    │   ├── hotdog
    │   ├── lego
    │   ├── materials
    │   ├── mic
    │   ├── README.txt
    │   └── ship
    └── tandt
        ├── train
        └── truck

3. Running

To run MesonGS, use:

SCENENAME=mic

CUDA_VISIBLE_DEVICES=0 python mesongs.py -s data/nerf_synthetic/$SCENENAME \
    --given_ply_path output/$SCENENAME/point_cloud/iteration_30000/point_cloud.ply \
    -w --eval \
    --iteration 10 \
    --scene_name $SCENENAME \
    --csv_path exp_data/csv/meson_$SCENENAME.csv \
    --model_path output/meson_$SCENENAME
  • Set --iteration to 0 for compression without finetuning.
  • Add --skip_post_eval to skip the time-consuming testing process.
  • Check the scripts directory for more examples.
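To compress several scenes in one go, the single-scene command above can be wrapped in a shell loop. The sketch below assumes the checkpoint layout from section 2.4 and the same flags as the example; it only prints each command as a dry run, so replace echo "$CMD" with eval "$CMD" (or paste the printed lines) to actually launch the jobs.

```shell
# Dry-run sketch: build the MesonGS command for every Blender scene.
SCENES="chair drums ficus hotdog lego materials mic ship"
for SCENENAME in $SCENES; do
  CMD="CUDA_VISIBLE_DEVICES=0 python mesongs.py -s data/nerf_synthetic/$SCENENAME \
    --given_ply_path output/$SCENENAME/point_cloud/iteration_30000/point_cloud.ply \
    -w --eval --iteration 10 --scene_name $SCENENAME \
    --csv_path exp_data/csv/meson_$SCENENAME.csv \
    --model_path output/meson_$SCENENAME"
  echo "$CMD"
done
```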

3.1 Rendering Compressed File

To render a compressed file, taking mic as an example, run:

MAINDIR=/your/path/to/mesongs
DATADIR=/your/path/to/data
CKPT=meson_mic
SCENENAME=mic

python render.py -s $DATADIR/nerf_synthetic/$SCENENAME \
    --given_ply_path $MAINDIR/output/$CKPT/point_cloud/iteration_0/pc_npz/bins.zip \
    --eval --skip_train -w \
    --dec_npz \
    --scene_name $SCENENAME \
    --csv_path $MAINDIR/exp_data/csv/test_$CKPT.csv \
    --model_path $MAINDIR/output/$CKPT

4. BibTeX

@inproceedings{xie2024mesongs,
    title={MesonGS: Post-training Compression of 3D Gaussians via Efficient Attribute Transformation},
    author={Xie, Shuzhao and Zhang, Weixiang and Tang, Chen and Bai, Yunpeng and Lu, Rongwei and Ge, Shijia and Wang, Zhi},
    booktitle={European Conference on Computer Vision},
    year={2024},
    organization={Springer}
}

5. TODO

  • Upload the version configured by the number of blocks instead of the block length.

6. Contributions

Some of our source code is borrowed from 3DGS, 3DAC, c3dgs, LightGaussian, and ACRF. We sincerely appreciate the excellent work of these authors.

7. Funding and Acknowledgments

This work is supported in part by the National Key Research and Development Project of China (Grant No. 2023YFF0905502) and the Shenzhen Science and Technology Program (Grant No. JCYJ20220818101014030). We thank the anonymous reviewers for their valuable advice and JiangXingAI for sponsoring this research.