Code release for BiGS: Bidirectional Primitives for Relightable 3D Gaussian Splatting (3D Vision 2025), by Liu Zhenyuan, Guo Yu, Xinyuan Li, Bernd Bickel, and Ran Zhang.
Train relightable Gaussian splats with OLAT (one-light-at-a-time) datasets, and relight with environment maps and point lights in real time.
For more details, check out our project page and our paper.
The code is tested on Ubuntu 20.04 with CUDA 12.2. Please clone the repo and create a conda environment from environment.yml.
git clone [email protected]:desmondlzy/bigs.git && cd bigs
conda env create -f environment.yml
A simple one-liner calls point_relight.py, which does all the dirty work for you, like fetching the data and checkpoints from Hugging Face.
Just have a coffee and wait for the relighting results to be produced into --output-path.
python scripts/point_relight.py --use-pretrained dragon-100000 --output-path bigs-outputs
Similarly, envmap_relight.py can also run with a pretrained model.
python scripts/envmap_relight.py --use-pretrained dragon-100000 --output-path bigs-outputs
Download our OLAT data (hosted on Hugging Face) into data/bigs.
The whole dataset takes around 17 GB of disk space and can be downloaded with
huggingface-cli download --repo-type=dataset desmondlzy/bigs-data --local-dir data/bigs
Alternatively, you may download only one scene via the --include option. The scene names can be found as the folder names in the dataset repo.
# downloading the `dragon` scene
huggingface-cli download --repo-type=dataset desmondlzy/bigs-data --include "dragon/*" --local-dir data/bigs
You should be able to see the folders named after the scenes in the data/bigs directory.
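If you prefer to do the download from Python, the huggingface_hub package (which powers huggingface-cli) offers snapshot_download; below is a minimal sketch where allow_patterns plays the role of --include.

from huggingface_hub import snapshot_download

# fetch only the `dragon` scene of the BiGS dataset into data/bigs
snapshot_download(
    repo_id="desmondlzy/bigs-data",
    repo_type="dataset",
    local_dir="data/bigs",
    allow_patterns=["dragon/*"],  # drop this argument to fetch the whole ~17 GB dataset
)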
The training has two steps: (1) train a normal Gaussian Splatting model, preferably using datasets with a neutral lighting condition; (2) train the lighting components using OLAT data.
Step 1: Train a Gaussian Splatting model.
Here we use a customized version of the splatfacto method provided by nerfstudio.
You could also use splatfacto or another splatting method by writing your own nerfstudio extension; your model class would need to subclass SplatfactoModel (see the sketch after the training command below).
ns-train mask-splat \
--experiment-name dragon \
--pipeline.model.cull-alpha-thresh 0.005 \
--pipeline.model.random-scale 1.2 \
bigs-blender \
--data data/bigs/dragon/olat_all_on
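As a rough illustration of the extension point mentioned above (not the actual mask-splat implementation), a custom nerfstudio splatting model could look like the sketch below; the class names are hypothetical placeholders, and the exact fields may vary with your nerfstudio version.

from dataclasses import dataclass, field
from typing import Type

from nerfstudio.models.splatfacto import SplatfactoModel, SplatfactoModelConfig

@dataclass
class MySplatModelConfig(SplatfactoModelConfig):
    """Hypothetical config; _target tells nerfstudio which model class to instantiate."""
    _target: Type = field(default_factory=lambda: MySplatModel)

class MySplatModel(SplatfactoModel):
    """Hypothetical model; override SplatfactoModel's loss/regularization hooks here,
    e.g. to mask out background pixels before computing the image loss."""
    config: MySplatModelConfig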
Step 2: Invoke the training script for the relightability training. On an NVIDIA A100 GPU, training for 100K iterations takes around 1.5 hours to complete.
python scripts/training.py \
--dataset-root data/bigs/dragon \
--ns-gaussian-config outputs/dragon/mask-splat/<config-path>/config.yml \
--output-path ./bigs-output/dragon/training
After running this script, the BiGS checkpoints can be found in the bigs-output/dragon/training directory, named model_{iter}.pth.
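For convenience, a tiny helper like the following (just a sketch that relies only on the model_{iter}.pth naming) picks out the latest checkpoint:

from pathlib import Path

# pick the checkpoint with the highest iteration count, e.g. model_100000.pth
ckpt_dir = Path("bigs-output/dragon/training")
latest_ckpt = max(ckpt_dir.glob("model_*.pth"), key=lambda p: int(p.stem.split("_")[-1]))
print(latest_ckpt)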
Then we load the trained BiGS model and render it under new lighting conditions. You can also run these scripts with the pretrained models as described above.
Run point_relight.py for relighting with a point light source.
After the script finishes, you can find the rendered videos and a JSON file containing the metrics in the output directory.
python scripts/point_relight.py \
--dataset-root data/bigs/dragon \
--checkpoint-path ./bigs-output/dragon/training/model_100000.pth \
--output-path ./bigs-output/dragon/point-relight
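To take a quick look at the metrics afterwards, something like the snippet below works; it is only a sketch and simply reads whatever JSON files the script wrote, since the exact filename may differ.

import json
from pathlib import Path

out_dir = Path("bigs-output/dragon/point-relight")
for metrics_file in out_dir.glob("*.json"):  # the metrics are stored as JSON in the output directory
    print(metrics_file.name, json.loads(metrics_file.read_text()))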
Run envmap_relight.py for relighting with an environment map. An example envmap is provided at data/envmaps/gear-store.exr.
After the script finishes, you can find the rendered videos in the output directory.
python scripts/envmap_relight.py \
--dataset-root data/bigs/dragon \
--checkpoint-path bigs-output/dragon/training/model_100000.pth \
--output-path bigs-output/dragon/envmap-relight \
--envmap data/envmaps/gear-store.exr
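If you want to sanity-check an environment map before swapping in your own, an HDR .exr can be inspected with OpenCV, for example; this is a sketch that assumes opencv-python is installed with OpenEXR support.

import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # OpenCV keeps EXR reading disabled unless this is set before import
import cv2

envmap = cv2.imread("data/envmaps/gear-store.exr", cv2.IMREAD_UNCHANGED)  # float32 HDR image, BGR channel order
print(envmap.shape, envmap.min(), envmap.max())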
We thank Andreas Mischok for the envmap peppermint-powerplant (polyhaven.com, CC0); Dimitrios Savva and Jarod Guest for the envmap gear-store (polyhaven.com, CC0); and the authors of gsplat for developing the amazing package.
Our synthetic data is generated using Mitsuba.
Special thanks to Changxi Zheng for supporting the internship program at Tencent Pixel Lab.