
Different performance on two computers with Diffusion #18

Open
LiuTingWed opened this issue Sep 3, 2023 · 2 comments

LiuTingWed commented Sep 3, 2023

Hello, I think you understand Diffusion better than I do, so I would like to discuss a problem with you and see if you can help me solve it.
Here is the situation:
When I run this diffusion project for segmentation tasks, I find that loading a checkpoint trained on the server (2*4090, PyTorch 1.9) on my local machine (2*2080Ti, PyTorch 1.8) does not reproduce the server's performance (Dice 84 vs 81), which puzzles me a lot. To find the best checkpoint, I run DDIM-accelerated sampling for inference after every 2 epochs of training, and then test to get results.
The problem is: during training on the server, the test metric reaches 84, but when the same checkpoint is loaded separately on the server for testing, it drops to 82. Stranger still, testing on the local machine gives 81. The gap between these numbers is hard for me to understand.
I suspect it may be because Diffusion needs to initialize random noise? However, I still see this problem even when setting the same random seed. In fact, with DDIM, choosing a different batch size at inference time also leads to slightly different performance, which puzzles me as well.
I look forward to your reply.
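One common source of the batch-size dependence described above is that the initial DDIM noise is drawn from a single shared RNG stream inside the batch loop, so regrouping images into different batch sizes can change which random numbers each image receives (on GPU, each `torch.randn` kernel launch can also advance the RNG state differently). Deriving a generator from `(seed, image index)` for each sample makes the starting noise independent of batching. A minimal NumPy sketch of the idea, where `IMG_SHAPE`, `noise_for_image`, and `noise_for_run` are hypothetical names for illustration:

```python
import numpy as np

IMG_SHAPE = (3, 8, 8)  # hypothetical noise/latent shape for illustration


def noise_for_image(seed, index, shape=IMG_SHAPE):
    """Initial diffusion noise for one image, derived from (seed, image index).

    Each image gets its own generator, so its noise does not depend on
    how the images happen to be grouped into batches.
    """
    rng = np.random.default_rng([seed, index])
    return rng.standard_normal(shape)


def noise_for_run(seed, n_images, batch_size, shape=IMG_SHAPE):
    """Draw noise batch by batch, the way a DDIM inference loop would."""
    batches = []
    for start in range(0, n_images, batch_size):
        idx = range(start, min(start + batch_size, n_images))
        batches.append(np.stack([noise_for_image(seed, i, shape) for i in idx]))
    return np.concatenate(batches)


if __name__ == "__main__":
    a = noise_for_run(seed=0, n_images=7, batch_size=2)
    b = noise_for_run(seed=0, n_images=7, batch_size=5)
    print(np.array_equal(a, b))  # True: identical noise regardless of batch size
```

The same pattern works in PyTorch with a per-sample `torch.Generator().manual_seed(...)` passed to `torch.randn`. Note that even with identical noise, nondeterministic cuDNN kernels and differing PyTorch/CUDA versions across machines can still produce small metric gaps, which may account for part of the 2080Ti-vs-4090 difference.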


ZJU-PLP commented Oct 24, 2023


@LiuTingWed Hi, TingWed.
Would you mind sharing your testing process? The published code does not include a testing script.

@LiuTingWed
Author

Sorry for replying so late.
https://github.com/LiuTingWed/CriDiff
