Hao Zhu¹, Buyu Li⁴, Feihu Zhang², Xun Cao¹, Yao Yao¹†
¹Nanjing University ²DreamTech ³HKU ⁴OriginArk
(* Equal Contribution, † Corresponding Author)
Seamless texturing and precise segmentation with TEXTRIX. Our model generates geometrically aligned textures and segmentations from a single-view input, avoiding the inter-view inconsistencies that commonly affect prevailing multi-view-based 3D generation methods.
- [2025/12/05] 🚀 We have released the paper and project page!
- [Coming Soon] 🔨 Code and model weights will be released soon. Please stay tuned!
Prevailing 3D texture generation methods rely on multi-view fusion and are frequently hindered by inter-view inconsistencies and incomplete coverage of complex surfaces, which limit the fidelity and completeness of the generated content.
To overcome these challenges, we introduce TEXTRIX, a native 3D attribute generation framework for high-fidelity texture synthesis and downstream applications such as precise 3D part segmentation. Our approach constructs a latent 3D attribute grid and leverages a Diffusion Transformer equipped with sparse attention, enabling direct coloring of 3D models in volumetric space and fundamentally avoiding the limitations of multi-view fusion.
Built upon this native representation, the framework naturally extends to high-precision 3D segmentation by training the same architecture to predict semantic attributes on the grid. Extensive experiments demonstrate state-of-the-art performance on both tasks, producing seamless, high-fidelity textures and accurate 3D part segmentation with precise boundaries.
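To make the idea of predicting attributes directly on a sparse latent grid more concrete, below is a minimal, hypothetical PyTorch sketch. It denoises per-voxel attributes (e.g. RGB color, or class logits for segmentation) with a small Transformer whose attention runs only over the occupied voxels of the grid. All module names, dimensions, and the exact attention layout are illustrative assumptions, not the TEXTRIX implementation.

```python
# Hypothetical sketch: per-voxel attribute denoising on a sparse latent grid.
# Not the authors' code; sizes and layout are illustrative assumptions.
import torch
import torch.nn as nn


class SparseGridAttributeDiT(nn.Module):
    def __init__(self, dim=256, depth=4, heads=8, attr_dim=3):
        super().__init__()
        self.coord_embed = nn.Linear(3, dim)        # embed occupied-voxel coordinates
        self.attr_embed = nn.Linear(attr_dim, dim)  # embed noisy attribute latents
        self.time_embed = nn.Sequential(nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4,
                                           batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, attr_dim)        # predict denoised attribute per voxel

    def forward(self, coords, noisy_attr, t):
        # coords:     (B, N, 3) normalized centers of occupied voxels only
        # noisy_attr: (B, N, attr_dim) noised attribute latents on those voxels
        # t:          (B, 1) diffusion timestep
        tokens = self.coord_embed(coords) + self.attr_embed(noisy_attr)
        tokens = tokens + self.time_embed(t).unsqueeze(1)
        tokens = self.blocks(tokens)                # attention only over occupied voxels
        return self.head(tokens)


# Usage: denoise per-voxel colors for one mesh with 1024 occupied voxels.
model = SparseGridAttributeDiT()
coords = torch.rand(1, 1024, 3)
noisy_rgb = torch.randn(1, 1024, 3)
t = torch.rand(1, 1)
pred = model(coords, noisy_rgb, t)                  # (1, 1024, 3)
```

In this sketch, switching the task from texturing to part segmentation only changes `attr_dim` (RGB channels vs. class logits), mirroring how the paper reuses the same architecture for both attribute types.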
If you find our work useful for your research, please consider citing our paper:
@article{zeng2025textrix,
title={TEXTRIX: Latent Attribute Grid for Native Texture Generation and Beyond},
author={Yifei Zeng and Yajie Bao and Jiachen Qian and Shuang Wu and Youtian Lin and Hao Zhu and Buyu Li and Feihu Zhang and Xun Cao and Yao Yao},
journal={arXiv preprint arXiv:2512.02993},
year={2025}
}