
How to train hovernet starting with semantic-level mask image? #339

Open
OmarAshkar opened this issue Dec 1, 2022 · 3 comments
Labels
enhancement New feature or request

Comments

@OmarAshkar

Is your feature request related to a problem? Please describe.
I have large WSI data and multi-class jpeg masks, but I have not been able to find a way to make them work with any HoVer-Net implementation.

Describe the solution you'd like
I'd like to be able to feed in my large WSI data along with the jpeg masks, and have the tiling and training take place from there.

Describe alternatives you've considered
If I still need instance masks, I can generate them with watershed, but I still don't know what format PathML would want (e.g. npy, mat, json, jpeg, etc.).
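
For reference, a rough sketch of that semantic-to-instance conversion using a distance-transform watershed (assuming scikit-image and SciPy are available; the file path, foreground threshold, and `min_distance` value are placeholders):

```python
# Hypothetical sketch: derive an instance mask from a class-level (semantic) mask
# by splitting touching objects with a distance-transform watershed.
import numpy as np
from PIL import Image
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Load the class-level mask; treat non-background pixels of one class as foreground
# (a jpeg may need a proper threshold to absorb compression artifacts)
semantic = np.array(Image.open("/path/to/class_mask.jpeg"))
foreground = semantic > 0

# Distance transform, local maxima as seed markers, then watershed on the inverted distance
distance = ndi.distance_transform_edt(foreground)
peak_coords = peak_local_max(distance, labels=foreground, min_distance=5)
markers = np.zeros(distance.shape, dtype=np.int32)
markers[tuple(peak_coords.T)] = np.arange(1, len(peak_coords) + 1)
instance_mask = watershed(-distance, markers, mask=foreground)
# instance_mask now assigns a unique integer id to each separated object
```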

Additional context

Any help is highly appreciated.

Thanks!

OmarAshkar added the enhancement (New feature or request) label on Dec 1, 2022
@jacob-rosenthal
Collaborator

You can provide masks alongside the WSI when initializing a SlideData object. Just load the masks into numpy arrays and pass a dictionary of masks where each item is a (key, mask) pair. Masks should be the same height and width as the WSI image. Then, when tiles are generated, each tile will also have the corresponding mask. Hope this helps point you in the right direction.
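
A minimal sketch of what that could look like, assuming the jpeg mask encodes class indices per pixel and that `SlideData` is importable from `pathml.core` (paths and mask names are illustrative):

```python
# Minimal sketch: attach a multi-class mask to a slide in PathML.
# The mask array must have the same height and width as the WSI image.
import numpy as np
from PIL import Image
from pathml.core import SlideData  # import path assumed; check the PathML docs

# Load the class-level jpeg mask into a numpy array
class_mask = np.array(Image.open("/path/to/class_mask.jpeg"))

# Pass a dict of (name, array) pairs as masks when creating the SlideData object;
# tiles generated downstream will carry the matching region of each mask
wsi = SlideData(
    "/path/to/slide.svs",
    masks={"class_mask": class_mask},
)
```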

@OmarAshkar
Author

@jacob-rosenthal Thank you. Just to follow up: SlideData has masks and labels attributes; which one should take the class (type) labels and which one the instances? I need to set that for HoVer-Net, I believe.

@jacob-rosenthal
Collaborator

Labels is meant to hold slide-level metadata, e.g. tissue type. Masks is for pixel-level metadata, e.g. a segmentation mask labeling which class each pixel is in.
For example, if you have a numpy array of a nucleus instance segmentation mask which is named nuclei_mask, you would load it into masks as a dictionary such as wsi = SlideData("/path/to/slide.svs", masks={"nuclei": nuclei_mask}).
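
To illustrate the masks vs. labels distinction in one call (assuming labels is also accepted as a dict at construction time, as masks is; the key and value below are illustrative):

```python
# masks: pixel-level arrays; labels: slide-level metadata (values are illustrative)
wsi = SlideData(
    "/path/to/slide.svs",
    masks={"nuclei": nuclei_mask},       # pixel-level, e.g. instance segmentation mask
    labels={"tissue_type": "breast"},    # slide-level, e.g. tissue type (assumed key)
)
```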
