
about validate and inference #16

Open
Huntersxsx opened this issue Mar 14, 2023 · 0 comments

Comments

@Huntersxsx

Hello, I notice that you use the validate function during training but the inference function during testing, and the results computed by these two functions for the same checkpoint are different.
The main implementation difference is that the validate function computes IoU over a batch, while the inference function computes IoU over the different sentences referring to the same image. However, since you use mean IoU as the evaluation metric, i.e. the average IoU across all test samples, I am confused about why this difference leads to different mIoU values.
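For example, here is a minimal sketch of how I imagine the two averaging schemes could diverge (the IoU values, grouping, and variable names below are made up for illustration and are not taken from your code):

```python
import numpy as np

# Hypothetical per-sample IoU values, grouped by image. The numbers are
# invented purely to illustrate the two averaging schemes.
ious_by_image = {
    "image_a": [0.9, 0.3, 0.6],   # three sentences for image_a
    "image_b": [0.8],             # one sentence for image_b
}

# Sample-level mIoU: flat average over every (image, sentence) pair,
# which is what a batch-wise loop effectively computes.
all_ious = [iou for v in ious_by_image.values() for iou in v]
micro_miou = np.mean(all_ious)    # (0.9 + 0.3 + 0.6 + 0.8) / 4 = 0.65

# Image-level mIoU: average within each image first, then across images,
# which is what a per-image loop would yield. The two differ whenever
# images have different numbers of sentences.
macro_miou = np.mean([np.mean(v) for v in ious_by_image.values()])
# (mean(0.9, 0.3, 0.6) + 0.8) / 2 = (0.6 + 0.8) / 2 = 0.70

print(f"sample-level mIoU: {micro_miou:.2f}")   # 0.65
print(f"image-level  mIoU: {macro_miou:.2f}")   # 0.70
```

If both functions take a flat average over all test samples, I would expect the same mIoU regardless of how the samples are grouped, which is why the discrepancy surprises me.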
Is there something I have misunderstood or overlooked?
Why use two different functions for validation and inference?
Looking forward to your reply~
