Multi-distribution fitting for multi-view stereo

[Figure: network_4scales, the 4-scale network architecture]

Training

  1. Prepare the DTU training set (640x512) and the BlendedMVS dataset (768x576).
  2. Edit config.py and set "DatasetsArgs.root_dir", "LoadDTU.train_root" and "LoadDTU.train_pair", and "LoadBlendedMVS.train_root" (see the config sketch after the commands below).
  3. Run the training script:
# DTU
python train.py -d dtu 
# BlendedMVS
python train.py -d blendedmvs
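
For reference, the relevant config.py entries might look like the following. The attribute names are those listed above, but the class layout and all paths are illustrative placeholders, not the repository's actual code.

# Illustrative config.py sketch; adjust paths to your local dataset layout.
class DatasetsArgs:
    root_dir = "/data/mvs"                                  # common dataset root

class LoadDTU:
    train_root = "/data/mvs/dtu_training"                   # 640x512 training set
    train_pair = "/data/mvs/dtu_training/Cameras/pair.txt"  # view-pairing file

class LoadBlendedMVS:
    train_root = "/data/mvs/BlendedMVS"                     # 768x576 dataset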

Testing

Pre-trained models are provided in the "pth" directory.

  1. Prepare the DTU test set (1600x1200) (Baidu Netdisk code: 6au3) and the Tanks and Temples dataset (Baidu Netdisk code: a4oz).
  2. Edit config.py and set "DatasetsArgs.root_dir", "LoadDTU.eval_root" and "LoadDTU.eval_pair", and "LoadTanks.eval_root".
  3. Run the evaluation script:
# DTU
python eval.py -p pth/dtu_29.pth -d dtu
# Tanks and Temples
python eval.py -p pth/blendedmvs_29.pth -d tanks
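
To sanity-check a downloaded checkpoint before a full run, it can be loaded directly with PyTorch. Whether the file stores a bare state_dict or a wrapping dictionary is an assumption to verify against the repository's eval.py.

import torch

# Load on CPU so no GPU is needed for the check.
ckpt = torch.load("pth/dtu_29.pth", map_location="cpu")
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:5])  # peek at the first few entries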

Fusion

There are three fusion methods in "tools": "filter", "gipuma", and "pcd".

DTU dataset

  1. Install the fusibile tool from tools/fusibile or https://github.com/kysucix/fusibile.
  2. Edit tools/gipuma/conf.py and set "root_dir", "eval_folder", and "fusibile_exe_path" (see the sketch after the commands below).
  3. Run the script:
cd tools/gipuma
python fusion.py -cfmgd
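
A minimal sketch of the values in tools/gipuma/conf.py: the variable names follow the list above, while the module-level layout and the paths are assumptions.

# Illustrative tools/gipuma/conf.py values; replace with your own paths.
root_dir = "/data/mvs/dtu_test"                      # DTU test set location
eval_folder = "outputs/dtu_eval"                     # depth maps written by eval.py
fusibile_exe_path = "tools/fusibile/build/fusibile"  # compiled fusibile binary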

Tanks and Temples dataset

  1. Run the script.
# filter (main method)
cd tools/filter
python dynamic_filter_gpu.py -e EVAL_OUTPUT_LOCATION -r DATASET_PATH -o OUTPUT_PATH 
# pcd
cd tools/pcd
chmod +x ninja_init.sh
source ninja_init.sh
python fusion.py -e EVAL_OUTPUT_LOCATION -r DATASET_PATH -o OUTPUT_PATH 
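
These fusion methods broadly rest on cross-view consistency checking: a depth estimate is kept only if reprojecting it into a source view and back lands near the original pixel with a similar depth. The NumPy sketch below shows this standard geometric check for a single pixel; the function names and thresholds are illustrative, not the repository's API, and extrinsics are assumed to be 4x4 world-to-camera matrices.

import numpy as np

def backproject(u, v, depth, K, w2c):
    # Lift pixel (u, v) at the given depth into world coordinates.
    cam = np.linalg.inv(K) @ (depth * np.array([u, v, 1.0]))
    c2w = np.linalg.inv(w2c)
    return c2w[:3, :3] @ cam + c2w[:3, 3]

def project(xw, K, w2c):
    # Project a world point into an image; also return its camera-space depth.
    cam = w2c[:3, :3] @ xw + w2c[:3, 3]
    uvw = K @ cam
    return uvw[:2] / uvw[2], cam[2]

def geometric_check(u, v, d_ref, K_ref, w2c_ref, depth_src, K_src, w2c_src,
                    pix_th=1.0, rel_depth_th=0.01):
    # Reference pixel -> world point -> source image.
    xw = backproject(u, v, d_ref, K_ref, w2c_ref)
    (us, vs), _ = project(xw, K_src, w2c_src)
    h, w = depth_src.shape
    ui, vi = int(round(us)), int(round(vs))
    if not (0 <= ui < w and 0 <= vi < h):
        return False  # projects outside the source view
    # Source pixel -> world point -> back into the reference image.
    xw_back = backproject(us, vs, depth_src[vi, ui], K_src, w2c_src)
    (ur, vr), d_back = project(xw_back, K_ref, w2c_ref)
    # Keep the estimate if reprojection error and relative depth drift are small.
    pix_err = np.hypot(ur - u, vr - v)
    rel_err = abs(d_back - d_ref) / max(d_ref, 1e-8)
    return pix_err < pix_th and rel_err < rel_depth_th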

Results (single Quadro RTX 5000)

DTU dataset

                 Acc (mm)  Comp (mm)  Overall (mm)  Time (s/view)  Memory (MB)
MDFNet (4scale)  0.349     0.303      0.326         0.376          4396

Mean F-score on the Tanks and Temples dataset

                 Intermediate  Advanced
MDFNet (4scale)  56.18         34.70
MDFNet (3scale)  60.24         37.31

[Figure: example reconstruction of the Family scene]

Acknowledgements

Our work is partially based on these open-source works: MVSNet, MVSNet-pytorch, D2HC-RMVSNet, and pcd-fusion. We appreciate their contributions to the MVS community.

Citation

This work will be published in Machine Vision and Applications.