The proposed method is implemented in PyTorch.
- Python 3.7
- PyTorch 1.9.1
- NumPy
- SciPy
- CUDA 9.0
- PyVista
- Matplotlib
- opencv-python
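Before running anything, it can help to confirm the dependencies above are importable. This is a minimal sketch; the module names (e.g. `cv2` for opencv-python, `torch` for PyTorch) follow the usual packaging conventions and are assumptions about your environment.

```python
# Sanity-check the dependencies listed above (a sketch; the module names,
# e.g. "cv2" for opencv-python, are assumptions about your install).
import importlib.util
import sys

required = ["torch", "numpy", "scipy", "pyvista", "matplotlib", "cv2"]
missing = [m for m in required if importlib.util.find_spec(m) is None]

print("Python", sys.version.split()[0])
print("missing packages:", ", ".join(missing) if missing else "none")
```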
We provide:
- Datasets and results:
  - Real-captured near-light image data
  - Synthetic near-light image data with ground-truth surface normals and depth
  - Estimation results from existing methods and ours, as shown in the main paper
- Implementation of our method:
  - `modules.py`: network structure of our neural surface
  - `loss_functions.py`: reconstruction loss with albedo depending on the surface normal and depth
- Code to reproduce the experimental results shown in the paper:
  - `demo.py`
- Download the data and results into the `data` folder and unzip them.
- Check the data and the released results from existing methods and ours, e.g.:
  - `synthetic_data`
    - `Buddha`
      - `render_img`: rendered image data
      - `render_para`: GT surface normal, depth, 3D mesh, point light positions, and radiant parameters
  - `Released_Result`: recovered surface normal and depth, reconstructed 3D mesh
    - `Buddha`
    - `synthetic_data`
- Reproduce the experimental results shown in the paper:

  ```
  python demo.py
  ```
The shape estimation results from our method will be saved to:
- `./data/synthetic_data/objectname/Result_Ours/datetime_TPAMI_submit_experimentname/Recoverd_Shapes`
- `./data/real_data/objectname/Result_Ours/datetime_TPAMI_submit_experimentname/Recoverd_Shapes`
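For illustration, loading a recovered shape might look like the sketch below. The file names `normal.npy` and `depth.npy` are hypothetical placeholders, not the repo's actual output format; check the contents of the result folder for the real file names.

```python
import os
import numpy as np

def load_result(result_dir):
    """Load a recovered normal map and depth map saved as .npy arrays.
    The file names here are hypothetical placeholders for illustration."""
    normal = np.load(os.path.join(result_dir, "normal.npy"))  # (H, W, 3) unit normals
    depth = np.load(os.path.join(result_dir, "depth.npy"))    # (H, W) depth values
    return normal, depth
```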
The network structure follows the SIREN (sinusoidal representation networks) architecture.
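For reference, the core of a SIREN layer is a linear map followed by a sine activation, sin(w0·(Wx + b)), with the frequency-scaled initialization from the SIREN paper. The NumPy sketch below is illustrative only; the actual implementation lives in `modules.py` and uses PyTorch.

```python
import numpy as np

def siren_layer(x, W, b, w0=30.0):
    """One SIREN layer: sine-activated linear map, sin(w0 * (x @ W + b))."""
    return np.sin(w0 * (x @ W + b))

def siren_init(n_in, n_out, w0=30.0, first=False, seed=0):
    """SIREN initialization: first layer ~ U(-1/n_in, 1/n_in),
    hidden layers ~ U(-sqrt(6/n_in)/w0, sqrt(6/n_in)/w0)."""
    rng = np.random.default_rng(seed)
    bound = 1.0 / n_in if first else np.sqrt(6.0 / n_in) / w0
    W = rng.uniform(-bound, bound, size=(n_in, n_out))
    b = rng.uniform(-bound, bound, size=n_out)
    return W, b

# Example: map 3D surface points through one SIREN layer.
pts = np.zeros((5, 3))             # five query points
W, b = siren_init(3, 64, first=True)
features = siren_layer(pts, W, b)  # shape (5, 64), values in [-1, 1]
```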