From d5f4b5fbf71c8ac74ad0b9a91f6d7d3782d20b03 Mon Sep 17 00:00:00 2001
From: Mehdi Seifi
Date: Wed, 8 Jan 2025 17:48:26 +0100
Subject: [PATCH] docs ducks daks

---
 README.md                  | 44 ++++++----------------------
 docs/howto.md              |  4 ++--
 docs/install.md            | 11 +++++----
 docs/segmentation.md       | 48 ++++++++++++++++++++++++++++++++++++--
 docs/stylesheets/extra.css |  5 ++++
 5 files changed, 67 insertions(+), 45 deletions(-)

diff --git a/README.md b/README.md
index 8603916..b9d67d1 100644
--- a/README.md
+++ b/README.md
@@ -13,10 +13,14 @@ We developed a *napari* plugin to train a *Random Forest* model using extracted
 ----------------------------------
 ## Documentation
-The plugin documentation is [here](docs/index.md).
+You can check the documentation [here](https://juglab.github.io/featureforest/) (⚠️ work in progress!).
 
 ## Installation
-To install this plugin you need to use [conda] or [mamba] to create a environment and install the requirements. Use the commands below to create the environment and install the plugin:
+To install this plugin you need to use [conda] or [mamba] to create an environment and install the requirements. Use the commands below to create the environment and install the plugin:
+```bash
+git clone https://github.com/juglab/featureforest
+cd ./featureforest
+```
 ```bash
 # for GPU
 conda env create -f ./env_gpu.yml
 
@@ -26,41 +30,7 @@ conda env create -f ./env_cpu.yml
 ```
 
-#### Note: You need to install `sam-2` which can be installed easily using conda. To install `sam-2` using `pip` please refer to the official [sam-2](https://github.com/facebookresearch/sam2) repository.
-
-### Requirements
-- `python >= 3.10`
-- `numpy==1.24.4`
-- `opencv-python`
-- `scikit-learn`
-- `scikit-image`
-- `matplotlib`
-- `pyqt`
-- `magicgui`
-- `qtpy`
-- `napari`
-- `h5py`
-- `pytorch=2.3.1`
-- `torchvision=0.18.1`
-- `timm=1.0.9`
-- `pynrrd`
-- `segment-anything`
-- `sam-2`
-
-If you want to install the plugin manually using GPU, please follow the pytorch installation instruction [here](https://pytorch.org/get-started/locally/).
-For detailed napari installation see [here](https://napari.org/stable/tutorials/fundamentals/installation).
-
-### Installing The Plugin
-If you use the provided conda environment yaml files, the plugin will be installed automatically. But in case you already have the environment setup,
-you can just install the plugin. First clone the repository:
-```bash
-git clone https://github.com/juglab/featureforest
-```
-Then run the following commands:
-```bash
-cd ./featureforest
-pip install .
-```
+For a more detailed installation guide, check [here](https://juglab.github.io/featureforest/install/).
 
 ## License
diff --git a/docs/howto.md b/docs/howto.md
index 00ead7b..c5844dc 100644
--- a/docs/howto.md
+++ b/docs/howto.md
@@ -16,5 +16,5 @@ As for the first step, we recommend making a small sub-stack to train a Random F
 After the training, you can save the RF model, and later apply it on the entire stack.
 
 ## Divide And Conquer
-Extracted features saved as an `HDF5` file can take a very large space on disk. In this method, to prevent the disk space overflow, you can divide your large stack into several sub-stacks. Then use the plugin for each, separately.
-Although, you can try one trained model over another sub-stack, Random Forest model can not be fine-tuned. By using this method, you can achieve better annotations with the expense of spending more time on training several models.
+Extracted features saved as an `HDF5` file can take up a lot of disk space. To prevent running out of disk space, you can divide your large stack into several sub-stacks, then use the plugin on each one separately.
+Although you can try a trained model on another sub-stack, the Random Forest model cannot be fine-tuned. With this method, you can achieve better annotations at the expense of spending more time training several models.
diff --git a/docs/install.md b/docs/install.md
index fe27d73..7baa8ca 100644
--- a/docs/install.md
+++ b/docs/install.md
@@ -1,5 +1,9 @@
 ## Easy Way!
-To install this plugin you need to use [mamba] or [conda] to create a environment and install the requirements. Use the commands below to create the environment and install the plugin:
+To install this plugin you need to use [mamba] or [conda] to create an environment and install the requirements. Use the commands below to create the environment and install the plugin:
+```bash
+git clone https://github.com/juglab/featureforest
+cd ./featureforest
+```
 ```bash
 # for GPU
 mamba env create -f ./env_gpu.yml
@@ -31,9 +35,6 @@ You need to install `sam-2` which can be installed easily using mamba (or conda)
 - `segment-anything`
 - `sam-2`
 
-If you want to install the plugin manually using GPU, please follow the pytorch installation instruction [here](https://pytorch.org/get-started/locally/).
-For detailed napari installation see [here](https://napari.org/stable/tutorials/fundamentals/installation).
-
 ## Installing Only The Plugin
 If you use the provided conda environment yaml files, the plugin will be installed automatically. But in case you already have the environment setup,
 you can just install the plugin. First clone the repository:
@@ -51,6 +52,8 @@ There is also a [pypi package](https://pypi.org/project/featureforest/) availabl
 pip install featureforest
 ```
 
+If you want to install the plugin manually with GPU support, please follow the PyTorch installation instructions [here](https://pytorch.org/get-started/locally/).
+For detailed napari installation instructions, see [here](https://napari.org/stable/tutorials/fundamentals/installation).
+
 [conda]: https://conda.io/projects/conda/en/latest/index.html
 [mamba]: https://mamba.readthedocs.io/en/latest/installation/mamba-installation.html
\ No newline at end of file
diff --git a/docs/segmentation.md b/docs/segmentation.md
index cd23f49..e71f723 100644
--- a/docs/segmentation.md
+++ b/docs/segmentation.md
@@ -6,7 +6,7 @@ The Segmentation widget is a long widget with several panels, but don't worry we
 ### Inputs
 1. **Input Layer**: To set which napari layer is your input image layer
 2. **Feature Storage**: Select your previously extracted features `HDF5` file here.
- ***Note***: You need to select the storage file for this particular input image, obviously!
+ ***Note***: You need to select the storage file for the selected input image.
 3. **Ground Truth Layer**: To select your *Labels* layer
 4. **Add Layer** button: To add a new GT layer to napari layers
 
@@ -17,5 +17,49 @@ The Segmentation widget is a long widget with several panels, but don't worry we
 - You can have as many *Labels* layer as you want. But **only the selected** one will be used for training the RF model.
 - You can also drag & drop your previously saved labels into the napari and select that layer.
+
 ## Train Model
-![Inputs](assets/segmentation_widget/seg_2.png){width="360" align=left}
+![Inputs](assets/segmentation_widget/seg_2.png){width="360" align=right}
+### Train Model (Random Forest)
+1. **Number of Trees**: To set the number of trees (estimators) in the forest
+2. **Max depth**: The maximum depth of a tree
+3. **Train** button: To extract the training data and train the **RF** model
+4. **Load Model** button: Using this, you can load a previously trained and saved model.
+5. **Save Model** button: To save the current RF model
+
+!!! tip
+    - Setting a high value for `Max depth` can overfit your **RF** model to the training data, so it won't perform well on unseen images.
+      But if you're segmenting the entire stack (or a single image), you may try higher values.
+
+
+## Prediction
+![Inputs](assets/segmentation_widget/seg_3.png){width="360" align=right}
+### Prediction
+###### Segmentation Layer:
+1. **New Layer**: If checked, the segmentation result will show up on a new layer in napari
+2. **Layer Dropdown**: To select which layer the segmentation result should go into
+3. **Add/Replace Segmentation** option: Based on your choice, this will either add the new segmentation to the previous result or completely replace it (default).
+###### Buttons:
+4. **Predict Slice** button: To generate the segmentation mask for the *current* slice
+5. **Predict Whole Stack** button: To start the prediction process for the whole loaded stack
+6. **Stop** button: Just for your safety! 😉 This will stop the prediction process.
+
+
+## Post-processing
+![Inputs](assets/segmentation_widget/seg_4.png){width="360"}
+
+-
+
+
+## Export
+![Inputs](assets/segmentation_widget/seg_5.png){width="360"}
+
+-
+
+
+## Run Prediction Pipeline
+![Inputs](assets/segmentation_widget/seg_6.png){width="360"}
+
+-
+
diff --git a/docs/stylesheets/extra.css b/docs/stylesheets/extra.css
index 7f7db90..b21512b 100644
--- a/docs/stylesheets/extra.css
+++ b/docs/stylesheets/extra.css
@@ -18,4 +18,9 @@
 .admonition>:last-child,
 html .md-typeset details>:last-child {
     font-size: 0.72rem;
+}
+
+
+.clear {
+    clear: both;
 }
\ No newline at end of file
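The *Train Model* panel added in `docs/segmentation.md` above exposes *Number of Trees* and *Max depth*. The sketch below shows how those two knobs map onto a Random Forest classifier; it assumes a scikit-learn backend and uses made-up toy features in place of the plugin's HDF5 feature storage, so it is an illustration, not the plugin's actual code:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy stand-ins for per-pixel feature vectors and their labels;
# in the plugin these would come from the extracted-features storage
# and the selected Labels layer.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 16))
labels = (features[:, 0] > 0).astype(int)

# "Number of Trees" -> n_estimators, "Max depth" -> max_depth.
# A very large max_depth tends to overfit sparse scribble annotations,
# hence the tip above about keeping it modest.
rf = RandomForestClassifier(n_estimators=450, max_depth=9, random_state=0)
rf.fit(features, labels)

mask = rf.predict(features)  # one class label per feature vector
print(mask.shape)
```

The *Save Model* / *Load Model* buttons would then amount to serializing `rf` (e.g. with `joblib.dump` / `joblib.load`); again, this assumes a scikit-learn implementation.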