Hello,

I'm looking at your description for how to train the fusion model in the supplemental:
> Finally, we load the best checkpoint and finetune only the cell for another 25K iterations with a learning rate of 5e-5 while warping the hidden states with the predicted depth maps.
The current training script at fusionnet/run-training.py doesn't have a flag for this. I can see that the GT depth is used for warping the current state at line 249.
What should I use as a depth estimator for this step? Should I borrow from this line at fusionnet/run-testing.py? Or (more likely) this differentiable estimator at line 157 in utils.py?
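To make sure I understand the recipe, here is roughly the setup I have in mind for the finetuning stage. This is only a sketch: the stand-in modules below are placeholders for the repo's pretrained networks and its fusion cell, not the actual classes.

```python
import torch
import torch.nn as nn

# Placeholders for the networks restored from the best checkpoint; in the
# repo these would be the pretrained encoder/decoder parts and LSTMFusion.
frozen_backbone = nn.Conv2d(3, 32, 3, padding=1)
fusion_cell = nn.LSTMCell(32, 32)

# Freeze everything except the cell, as described in the supplemental.
for p in frozen_backbone.parameters():
    p.requires_grad = False

# Finetune only the cell for 25K iterations with a learning rate of 5e-5.
optimizer = torch.optim.Adam(fusion_cell.parameters(), lr=5e-5)
```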
Thanks.
Sorry for not having that part in the repository. Both should work and give similar results, since training is done with square-shaped images and gradient flow disabled. As far as I can remember, the non-differentiable function was having issues with GPU memory on a GTX 1080 Ti for some reason during training (which is already quite maxed out with the current batch and sub-sequence sizes). Therefore:
- Borrow lines 87, 88, [179, 191], 201, and 202 from fusionnet/run-testing.py.
- Replace the non-differentiable function with the differentiable one, purely because of the potential memory issues. Don't forget to .detach() the prediction tensor when assigning it to the previous_depth variable, so that the gradient flow stays disabled (see the sketch after this list).
- Set the image size function parameters correctly to Config.training_image_width and Config.training_image_height.
- Finetune only the LSTMFusion module.
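Putting those pieces together, the per-step pattern would look roughly like the following. This is a minimal, self-contained sketch: the cell, the depth head, and the warp function are illustrative stand-ins, not the repo's actual implementations.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins: the real code uses the convolutional LSTMFusion
# cell and the differentiable warping function from utils.py (line 157).
cell = nn.LSTMCell(32, 32)
predict_depth = nn.Linear(32, 1)

def warp_hidden_state(hidden_state, previous_depth):
    # Placeholder for the differentiable warp. The real function also takes
    # the relative pose, the intrinsics, and the image size, which should be
    # set to Config.training_image_width and Config.training_image_height.
    return hidden_state

previous_depth = None
hidden_state = torch.zeros(4, 32)
cell_state = torch.zeros(4, 32)

for inputs in [torch.randn(4, 32) for _ in range(3)]:  # dummy sub-sequence
    if previous_depth is not None:
        # Warp with the *predicted* depth instead of the GT depth that
        # run-training.py uses at line 249.
        hidden_state = warp_hidden_state(hidden_state, previous_depth)
    hidden_state, cell_state = cell(inputs, (hidden_state, cell_state))
    depth = predict_depth(hidden_state)
    # .detach() disables gradient flow into the next step's warp.
    previous_depth = depth.detach()
```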
I cannot test this right now, so please let me know how it goes or if you need more info.