Official repository for Array Camera Image Fusion using Physics-Aware Transformers, arXiv:2207.02250.
- To do: document the repository in detail and add comments
Method 1: Open put-together.py in Blender 2.92.0, change the paths to match your local machine, and run the script to generate the dataset.
Method 2: Download our dataset here (powered by UA ReDATA)
Then run trainingDataSynthesis/test/gen_patches.py to generate patches.
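gen_patches.py defines the actual patch size, stride, and output layout. Purely as a sketch of what this step does (the function, patch_size, and stride below are hypothetical, not the repo's code), cropping a registered image pair into aligned patches looks roughly like this:

```python
# Illustrative only: crop a registered image pair into aligned, fixed-size patches.
# The patch size and stride here are hypothetical; the real values are set in gen_patches.py.
import numpy as np
from PIL import Image

def extract_patches(img_a_path, img_b_path, patch_size=64, stride=32):
    a = np.asarray(Image.open(img_a_path))
    b = np.asarray(Image.open(img_b_path))
    h, w = a.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append((a[y:y + patch_size, x:x + patch_size],
                            b[y:y + patch_size, x:x + patch_size]))
    return patches
```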
Dependencies are listed under PAT/requirements.txt. The environment was exported from the pytorch/nvidia/20.01 Docker image on the PUMA nodes of UA HPC.
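After installing the requirements (or pulling an equivalent container), a quick, generic sanity check that PyTorch imports and the GPU is visible (not repo-specific) is:

```python
# Generic environment check, not part of the repo: confirm PyTorch imports and CUDA is visible.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```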
To train the two-input model:

```
python train.py --trainset_dir [path to your training patches] --validset_dir [path to your validation patches]
```

or, to train the four-input model:

```
python train_4inputs.py --trainset_dir [path to your training patches] --validset_dir [path to your validation patches]
```
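--trainset_dir and --validset_dir should point at the patch folders produced by gen_patches.py. The real loader ships with train.py; the sketch below is a hypothetical illustration of a paired-patch PyTorch Dataset (the folder layout and the file names input.png/target.png are assumptions, not the repo's format):

```python
# Hypothetical paired-patch dataset; the actual loader and on-disk layout are defined
# by the repo's train.py and gen_patches.py, not by this sketch.
import os
import numpy as np
import torch
from torch.utils.data import Dataset
from PIL import Image

class PairedPatchDataset(Dataset):
    def __init__(self, root):
        # Assume one sub-folder per patch pair (assumption for illustration only).
        self.samples = sorted(os.path.join(root, d) for d in os.listdir(root))

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        folder = self.samples[idx]
        x = np.asarray(Image.open(os.path.join(folder, "input.png")), dtype=np.float32) / 255.0
        y = np.asarray(Image.open(os.path.join(folder, "target.png")), dtype=np.float32) / 255.0
        # Convert HWC images to CHW tensors.
        return torch.from_numpy(x).permute(2, 0, 1), torch.from_numpy(y).permute(2, 0, 1)
```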
To test a trained model:

```
python demo_test.py --model_dir log_2inputs
```

or, for the four-input model:

```
python demo_test_4inputs.py --model_dir log_4inputs
```

To run inference on a whole image, use Inference_as_a_whole_pittsburgh.ipynb or Inference_as_a_whole.ipynb.
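demo_test.py and demo_test_4inputs.py load the trained weights from --model_dir, and the notebooks process the full frame rather than fixed patches. As a generic, hedged sketch of that pattern (the checkpoint handling, two-tensor call signature, and names below are assumptions, not the repo's API), whole-image inference in PyTorch looks like:

```python
# Hypothetical inference sketch: load a checkpoint and run the network on a full
# image pair with gradients disabled. Names and shapes are placeholders.
import torch

def run_inference(model, checkpoint_path, img_a, img_b, device="cuda"):
    state = torch.load(checkpoint_path, map_location=device)
    model.load_state_dict(state)  # assumes the checkpoint stores a plain state_dict
    model.to(device).eval()
    with torch.no_grad():
        # img_a / img_b as CxHxW float tensors; add a batch dimension before the forward pass.
        fused = model(img_a.unsqueeze(0).to(device), img_b.unsqueeze(0).to(device))
    return fused.squeeze(0).cpu()
```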
Some code is borrowed from https://github.com/The-Learning-And-Vision-Atelier-LAVA/PASSRnet.