
🌈 Quantifying and Alleviating Co-Adaptation in
Sparse-View 3D Gaussian Splatting

Kangjie Chen¹   Yingji Zhong²   Zhihao Li³   Jiaqi Lin¹
Youyu Chen⁴   Minghan Qin¹   Haoqian Wang¹ 📪
📪 corresponding author
¹ Tsinghua University   ² HKUST   ³ Huawei Noah's Ark Lab
⁴ Harbin Institute of Technology

Project Page arXiv Paper Videos Compare

🧭 Quick Navigation

🔧 Integration into Your Project !!!!

Setup, Training and Evaluation for Co-Adaptation-of-3DGS

Why Color Artifacts in Sparse-View 3DGS?

(or Why Dropout Works in Sparse-View 3DGS?)

(or Why Noise Injection Works in Sparse-View 3DGS?)

Quantitative Comparison

📖 Citation

📌 TL;DR

This paper introduces the concept of co-adaptation in 3D Gaussian Splatting (3DGS), analyzes its role in rendering artifacts, and proposes two strategies:

  • 🎲 Dropout Regularization – Randomly drops subsets of Gaussians to prevent over-co-adaptation.
  • 🌫️ Opacity Noise Injection – Adds noise to opacity values, suppressing spurious fitting and enhancing robustness.

We additionally explore noise injection on other Gaussian attributes and advanced dropout variants; see the Appendix of our paper for details.

The code is based on Binocular3DGS. Thanks for their great work!

📊 Quantitative Comparison on LLFF and DTU Datasets

We evaluate existing 3DGS-based sparse-view reconstruction methods with and without our proposed co-adaptation suppression strategies (dropout regularization and opacity noise injection). We report PSNR, SSIM, LPIPS, and Co-Adaptation (CA) scores on both training and novel views to assess reconstruction quality and co-adaptation reduction. Experiments are conducted on the LLFF and DTU datasets with 3 training views, following prior works (Binocular3DGS, CoR-GS, FSGS, RegNeRF, FreeNeRF). Input images are downsampled 8× for LLFF and 4× for DTU relative to their original resolutions.

🌟 Abstract

3D Gaussian Splatting (3DGS) has demonstrated impressive performance in novel view synthesis under dense-view settings. However, in sparse-view scenarios, despite realistic renderings in training views, 3DGS occasionally manifests appearance artifacts in novel views. This paper investigates these appearance artifacts and uncovers a core limitation of current approaches: the optimized Gaussians are overly entangled with one another to aggressively fit the training views, which leads to a neglect of the real appearance distribution of the underlying scene and results in appearance artifacts in novel views. The analysis is based on a proposed metric, termed the Co-Adaptation Score (CA), which quantifies the entanglement among Gaussians, i.e., co-adaptation, by computing the pixel-wise variance across multiple renderings of the same viewpoint, each with a different random subset of Gaussians. The analysis reveals that the degree of co-adaptation is naturally alleviated as the number of training views increases. Based on the analysis, we propose two lightweight strategies to explicitly mitigate co-adaptation in sparse-view 3DGS: (1) random Gaussian dropout; (2) multiplicative noise injection to the opacity. Both strategies are designed to be plug-and-play, and their effectiveness is validated across various methods and benchmarks. We hope that our insights into the co-adaptation effect will inspire the community to achieve a more comprehensive understanding of sparse-view 3DGS.
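As a rough illustration (not the repository's implementation, whose exact normalization may differ), the CA metric described above can be sketched as the pixel-wise variance across K renderings of the same viewpoint, each produced with a different random subset of Gaussians, averaged over pixels and channels:

```python
import numpy as np

def co_adaptation_score(renders):
    """Toy sketch of the CA score: pixel-wise variance across K renders
    of one viewpoint, averaged over pixels/channels.
    `renders` has shape (K, H, W, 3)."""
    renders = np.asarray(renders, dtype=np.float64)
    return renders.var(axis=0).mean()

# identical renders from different Gaussian subsets -> no co-adaptation signal
same = np.ones((4, 8, 8, 3))
# diverging renders -> positive score
rng = np.random.default_rng(0)
diff = rng.random((4, 8, 8, 3))
```

Intuitively, if removing random subsets of Gaussians barely changes the rendering, the Gaussians are weakly entangled and the score stays near zero; large disagreement across subsets signals strong co-adaptation.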

❓ Why Color Artifacts in Sparse-View 3DGS?

(or Why Dropout Works in Sparse-View 3DGS?) (or Why Noise Injection Works in Sparse-View 3DGS?)

Visualization

Visualization of 3DGS behaviors under different levels of co-adaptation.

  • Thin gray arrows → training views
  • ✅ ❌ Bold arrows → novel view
  • ✅ Green arrow → correct color prediction
  • ❌ Red arrow → color errors

πŸ› οΈ Setup

Installation

Clone Co-Adaptation-of-3DGS

git clone --recursive https://github.com/chenkangjie1123/Co-Adaptation-of-3DGS.git

Setup Anaconda Environment

conda create -n coadaptation3dgs python=3.10
conda activate coadaptation3dgs
pip install -r requirements.txt
pip install submodules/diff-gaussian-rasterization
pip install submodules/simple-knn

Dataset

  • Download the processed datasets: LLFF and DTU
  • Download the NeRF Synthetic dataset from here

Checkpoints

Binocular3DGS uses the pre-trained PDCNet+ model to generate dense initialization point clouds. The pre-trained PDCNet+ model can be downloaded here.

Put the pre-trained model in submodules/dense_matcher/pre_trained_models

⚡️ Training and Evaluation

LLFF dataset

python script/run_llff.py

DTU dataset

python script/run_dtu.py

NeRF Synthetic dataset (Blender)

When training on the Blender dataset, the evaluation metrics differ significantly between a white background and a black background. The paper adopts the white-background setting, whereas the script here uses a black background.

python script/run_blender.py

🔧 Integration into Your Project

Our strategy is designed for sparse-view settings and is fully compatible with the 3D Gaussian Splatting (3DGS) framework. It can be seamlessly integrated into your own 3DGS-based project with only minimal changes.

To integrate 🎲 Dropout Regularization, simply add the following lines to your ./gaussian_renderer/__init__.py file in the 3DGS framework:

# build a random dropout mask over all Gaussians
# (torch.rand already places the tensor on the right device; the extra
# .cuda() call in earlier versions was redundant)
if dropout_factor > 0.0 and train:
    dropout_mask = torch.rand(pc.get_opacity.shape[0], device=pc.get_opacity.device)
    dropout_mask = dropout_mask < (1 - dropout_factor)

# randomly drop 3DGS points during training
if dropout_factor > 0.0 and train:
    means3D   = means3D[dropout_mask]
    means2D   = means2D[dropout_mask]
    shs       = shs[dropout_mask]
    opacity   = opacity[dropout_mask]
    scales    = scales[dropout_mask]
    rotations = rotations[dropout_mask]
elif not train:
    # rescale opacity at test time to match the training-time expectation
    opacity *= 1 - dropout_factor
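As a standalone sanity check (a toy sketch, not repository code), the test-time rescaling by `(1 - dropout_factor)` matches the expected opacity mass that survives training-time dropout, mirroring standard inverted-dropout reasoning:

```python
import torch

torch.manual_seed(0)
dropout_factor = 0.1
opacity = torch.full((100_000, 1), 0.5)  # stand-in for per-Gaussian opacity

# training: each Gaussian survives with probability (1 - dropout_factor)
mask = torch.rand(opacity.shape[0]) < (1 - dropout_factor)
mean_kept = opacity[mask].sum() / opacity.shape[0]  # avg surviving opacity mass

# test: deterministic rescaling approximates that expectation
mean_scaled = (opacity * (1 - dropout_factor)).mean()
```

With many Gaussians, `mean_kept` and `mean_scaled` agree closely, which is why the deterministic rescaling is a reasonable test-time substitute for stochastic dropping.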

To integrate 🌫️ Opacity Noise Injection, simply add the following lines:

# add noise to opacity during training
if train and sigma_noise > 0.0:
    # sigma_noise = 0.8  # example value
    epsilon_opacity = torch.randn_like(opacity, device=opacity.device) * sigma_noise
    epsilon_opacity = torch.clamp(epsilon_opacity, min=-sigma_noise, max=sigma_noise)
    opacity = torch.clamp(opacity * (1.0 + epsilon_opacity), min=0.0, max=1.0)
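A quick self-contained check (with hypothetical values, not taken from the repository) shows that the clamped multiplicative noise above always yields valid opacities in [0, 1]:

```python
import torch

torch.manual_seed(0)
sigma_noise = 0.8
opacity = torch.rand(10_000, 1)  # stand-in for activated opacities in [0, 1]

# same scheme as above: zero-mean Gaussian noise, clamped, applied multiplicatively
epsilon = torch.clamp(torch.randn_like(opacity) * sigma_noise,
                      min=-sigma_noise, max=sigma_noise)
noisy = torch.clamp(opacity * (1.0 + epsilon), min=0.0, max=1.0)
```

Clamping the noise to [-sigma_noise, sigma_noise] bounds the perturbation per Gaussian, and the outer clamp guarantees the rasterizer still receives well-formed opacity values.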

💡 Note: The optimal parameter setting of Opacity Noise Injection may vary across baselines, configurations, scenes, and dataset types. We therefore generally recommend Dropout Regularization, whose parameter choices are more robust and stable in practice.

📖 Citation

If you find our work helpful, please ⭐ our repository and cite:

@article{chen2025quantifying,
  title={Quantifying and Alleviating Co-Adaptation in Sparse-View 3D Gaussian Splatting},
  author={Chen, Kangjie and Zhong, Yingji and Li, Zhihao and Lin, Jiaqi and Chen, Youyu and Qin, Minghan and Wang, Haoqian},
  journal={arXiv preprint arXiv:2508.12720},
  year={2025}
}

🔄 Concurrent Works

Two other concurrent works, DropoutGS and DropGaussian, also use dropout to boost sparse-view 3DGS.

They attribute the effectiveness of dropout to empirical factors, such as reducing overfitting through fewer active splats (DropoutGS) or enhancing gradient flow to distant Gaussians (DropGaussian).

We respect these insights and are pleased that several works highlight the benefits of dropout in sparse-view 3DGS. Our work complements these findings by offering a deeper analysis of co-adaptation, with the goal of stimulating broader discussion on more generalizable 3D representations.

About

[NeurIPS 2025] Official implementation of the paper "Quantifying and Alleviating Co-Adaptation in Sparse-View 3D Gaussian Splatting."
