How to use the code for Inference? #74

Open
zeinebBC opened this issue Jan 10, 2024 · 9 comments

Comments

@zeinebBC

I'm seeking clarity on utilizing the code during inference for testing a fine-tuned model on a dataset without target masks. Is there any guidance provided in the associated paper or repository on how to perform this task effectively? What prompting techniques could I employ when I don't have information regarding the target masks' locations? How can I evaluate the accuracy of the predicted masks in the absence of target masks?

@FJGEODEV

+1, I was trying to use val.py, but no luck. May need author's help.

@WuJunde
Collaborator

WuJunde commented Feb 26, 2024

1. You cannot evaluate prediction accuracy without target masks (that is, ground truth).
2. SAM is an interactive model, so the common assumption is that the user provides a prompt for each image (such as a click on the target object). In the code, we generate this prompt from the target mask instead, to simulate a user-given prompt. If you have neither a user-given prompt nor a target-mask-generated prompt, you may want to try the "segment everything" setting described in the SAM paper: it click-prompts the original image on a regular grid and keeps the top-k high-confidence objects predicted by the model. To use it, you need to train the adapters under this setting.
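If it helps, here is a minimal sketch of that "segment everything" mode using the original segment_anything automatic mask generator. The checkpoint name, grid density, thresholds, image path, and top-k value are placeholders, and for Medical-SAM-Adapter you would build the adapter model with the repo's own code and, as noted above, train the adapters under this setting first.

```python
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Plain SAM ViT-B backbone as a stand-in; swap in the adapter model from this
# repo (built the same way as in train.py) once it is trained for this setting.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
sam.to("cuda")

mask_generator = SamAutomaticMaskGenerator(
    sam,
    points_per_side=32,          # density of the click-prompt grid
    pred_iou_thresh=0.88,        # discard low-confidence predictions
    stability_score_thresh=0.95,
)

image = cv2.cvtColor(cv2.imread("example.png"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # list of dicts: 'segmentation', 'predicted_iou', ...

# Keep only the top-k most confident predicted objects.
top_k = sorted(masks, key=lambda m: m["predicted_iou"], reverse=True)[:5]
```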

@janexue001

According to the instructions in README.md, I have trained the model and obtained best_checkpoint. May I ask how to load the checkpoint for subsequent segmentation tasks?

@WuJunde
Collaborator

WuJunde commented Mar 8, 2024


Evaluation: The code can automatically evaluate the model on the test set during training; set "--val_freq" to control how often (in epochs) to evaluate. You can also run val.py for an independent evaluation.

Result Visualization: You can set the "--vis" parameter to control how often (in epochs) to save result visualizations during training or evaluation.

By default, everything is saved at ./logs/
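If it helps with the "how to load the checkpoint" question, below is a minimal sketch (not part of the repo) of loading best_checkpoint outside of val.py. The checkpoint path, the 'state_dict' key, and the plain SAM ViT-B stand-in are assumptions; in practice you should build the network exactly as train.py does so the adapter modules are present, or simply point val.py at the saved weights.

```python
import torch
from segment_anything import sam_model_registry

# Assumption: the checkpoint written under ./logs/ either stores the weights
# directly or wraps them in a dict under a 'state_dict' key.
ckpt = torch.load("./logs/<your_exp_name>/best_checkpoint", map_location="cpu")
state_dict = ckpt["state_dict"] if isinstance(ckpt, dict) and "state_dict" in ckpt else ckpt

# Plain SAM ViT-B as a stand-in; the adapter-trained checkpoint also contains
# adapter parameters, so build the model with the repo's own code for a strict load.
sam = sam_model_registry["vit_b"](checkpoint=None)
missing, unexpected = sam.load_state_dict(state_dict, strict=False)
sam.eval()
print(f"missing keys: {len(missing)}, unexpected keys: {len(unexpected)}")
```

For the visual results themselves, running val.py with the "--vis" option set, as described above, is the supported route.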

@janexue001


Thank you for your reply. The details of the training process can indeed be seen in the logs. However, beyond that, I want to see the visual segmentation results produced by the trained model.

@janexue001

janexue001 commented Mar 15, 2024 via email

@Part-Work

How can I evaluate 'OpticDisc_Fundus_SAM_1024.pth' and 'sam_vit_b_01ec64.pth' on the 'REFUGE' dataset?


@visionbike

Thank you very much for your reply. It is true that the detailed records of the model training process can be found in the logs. However, I would also like to see the visual segmentation results. In addition, I would like to ask one more question: I tried multi-class segmentation by setting "-multimask_output" in cfg.py to 2, which worked successfully with the sam model, but with efficient_sam I get "ValueError: Target size (torch.Size([16, 2, 256, 256])) must be the same as input size (torch.Size([16, 1, 256, 256]))". All the best to you.


You need to modify the parts related to num_multimask_output in EfficientSAM, following SAM's code.
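For context, here is a minimal sketch of why that ValueError appears and what the decoder change needs to produce; the shapes come from the error message above, and the tensors are random stand-ins rather than real model outputs.

```python
import torch
import torch.nn.functional as F

batch, num_classes, h, w = 16, 2, 256, 256
target = torch.randint(0, 2, (batch, num_classes, h, w)).float()

# A decoder that emits a single mask channel cannot be scored against a
# two-channel target: this reproduces the reported ValueError.
pred_single = torch.randn(batch, 1, h, w)
try:
    F.binary_cross_entropy_with_logits(pred_single, target)
except ValueError as err:
    print(err)  # Target size ... must be the same as input size ...

# After the num_multimask_output-related change, the decoder should emit one
# mask per class, so the shapes match and the loss computes normally.
pred_multi = torch.randn(batch, num_classes, h, w)
loss = F.binary_cross_entropy_with_logits(pred_multi, target)
print(loss.item())
```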
