Support for QuPath 0.5 and local models #36
Local model support implemented: whenever the user directory is changed, we scan for subdirectories and add them as models based on the folder name. Invalid entries pop up a notification; otherwise, local models should work just as online models do, at least based on my running a modified version of a remote model. Disabled checking modification times and checksums, because that seems unnecessary for user models.
- Create WebView using `WebViews` class to standardize styling
- Fix empty dialog titles generated in the task created by `WSInferController`
- Standardize download & info icon approach in fxml
- Fix use of multiple style classes in fxml
- Reorder extra pane at bottom (to separate 'processing' options from 'model path' options)
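The scanning behaviour described above can be sketched roughly as follows. This is an illustrative sketch only: the function name and the file names checked (`torchscript_model.pt`, `config.json`) are assumptions, and the extension itself is written in Java, not Python.

```python
from pathlib import Path

def scan_user_models(user_dir):
    """Treat each subdirectory of the user model directory as a model,
    named after its folder; flag folders missing the expected files."""
    valid, invalid = [], []
    for entry in sorted(Path(user_dir).iterdir()):
        if not entry.is_dir():
            continue
        # Assumed layout: a TorchScript file plus a config file alongside it.
        has_model = (entry / "torchscript_model.pt").exists()
        has_config = (entry / "config.json").exists()
        (valid if has_model and has_config else invalid).append(entry.name)
    return valid, invalid
```

Valid folders would be added under their folder name; invalid ones would trigger the notification mentioned above.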
@alanocallaghan can you briefly explain how to use this? Put what files where...? You mention 'whenever the user directory is changed', but I assume that's not referring to QuPath's 'user directory', right?
The "user directory" refers to the "User model directory" in the WSInfer extension window
fantastic! 🚀
Fantastic indeed! Can confirm this also works with home-built models and custom …
Great that it works! Question to all: Would it be a good or bad idea to just use one model directory, and not store user models somewhere else? The main reasons for:
Main reason against (that I can think of):
Could just have a "user" subdirectory that's checked automatically, and ensure no model named "user" is ever added to the model zoo
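The guard suggested here could be as simple as the following sketch; `register_zoo_model` and the `zoo_models` list are hypothetical names for illustration, not part of the actual extension or model zoo code.

```python
# Reserved name for the auto-checked local-models subdirectory (assumption).
RESERVED_DIR = "user"

def register_zoo_model(name, zoo_models):
    """Add a model name to the zoo registry, refusing the reserved name
    so it can never collide with the auto-checked "user" subdirectory."""
    if name == RESERVED_DIR:
        raise ValueError(f"model name {name!r} is reserved for local models")
    zoo_models.append(name)
```

A check like this at zoo-registration time would make the single-directory layout safe.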
Personally, a single directory is what I was expecting. In fact, I had added my models to an …

Regarding your concern:
If I understood correctly, I don't think this is going to be much of a problem, because I would advise anyone with more than 6–8 models to use the text filtering capabilities of that dropdown.
i think it is a good idea to have just one model directory, because then there will be a single source of truth. this would simplify the user experience imho. if this isn't part of the extension yet, it might be useful to have a button that opens a file browser to the model directory. although i hope users wouldn't fiddle around with the downloaded models, which could break something ;) i wanted to add some info to one point:
just for clarity, the …
Thank you all for your efforts in this direction. Not sure if this feature is done, but I am assuming it is, since it has been merged and a release has been made. I just tried using wsinfer (v0.3) on the latest qupath (v0.5) with my local models (.pt models, trained on CUDA) on my computer (mac).
In terms of directory structure, …
Hopefully this feedback is helpful. Please let me know if I'm doing something wrong and there is a way to fix this.
Hey @vipulnj, indeed the steps for this process have changed since this pull request. The updated process is detailed in #48, basically:
Do let us know if there are any issues with that process
This is a fantastic feature! Are there any plans to include this on the QuPath site or the 'Use your own model' section of the WSInfer site? Or is this documented elsewhere and I have just missed it?
it isn't on the qupath site, but it should be. @sandyCarmichael - would you be interested in submitting a pr that adds these steps to the qupath documentation? |
Hi, I am on Mac M1, QuPath 0.5.1, WSInfer 0.3. I tried to add a local model (which I tested on the WSInfer command line), but no success. I tried …
The model should definitely be a sibling of …
Do you see any messages in the log viewer when opening WSInfer with this setup?
This error is vaguely familiar, but I can't remember exactly what caused it before. There are some DJL- and PyTorch-related issues:

that might help?
Added a section on running your own models, after discussion at qupath/qupath-extension-wsinfer#36
Thanks, not yet able to solve that; anyway, I submitted a pull request for the docs related to the QuPath extension, so the basics are documented.
Thanks! An alternative may be to run the model on CPU for now
well, not really: it also does not run on CPU, same error. (Zoo models run.)
can you try transferring your model to cpu before exporting it to torchscript? some pseudo-code to illustrate what i am thinking:

```python
model.eval()
model.cpu()
# Add other arguments as needed.
model_jit = torch.jit.script(model)
torch.jit.save(model_jit, "torchscript_model.pt")
```

then i think the model should work on cpu or gpu.... fingers crossed 🤞
@kaczmarj: thanks, this plus some magic found on Stack Overflow solved the issue (I have yet to consolidate & understand what I did, but now it runs :)).
Add a section on running your own models, after discussion at qupath/qupath-extension-wsinfer#36 Co-authored-by: Vincenzo Della Mea @ Medical Informatics & Telemedicine Lab, UNIUD <[email protected]>
Resolves #32