
Colab notebooks update #103

Merged 35 commits on Dec 22, 2024

Commits
4dab4ea
Fix use of deprecated arg in colab training
C-Achard Dec 17, 2024
981b330
Refactor model save name path + comment wandb cell
C-Achard Dec 21, 2024
c201a0e
Update Colab_WNet3D_training.ipynb
C-Achard Dec 21, 2024
c199b5f
Improve logging in Colab
C-Achard Dec 21, 2024
6a42d26
Subclass WnetTraininWorker to avoid duplication
C-Achard Dec 21, 2024
6ecb2fc
Remove strict channel first
C-Achard Dec 21, 2024
b7aa88b
Add missing channel_dim, remove strict_check=False
C-Achard Dec 21, 2024
a76037c
Update worker_training.py
C-Achard Dec 21, 2024
f722137
Update worker_training.py
C-Achard Dec 21, 2024
1565731
Disable strict checks for channelfirstd
C-Achard Dec 21, 2024
85b0640
Update worker_training.py
C-Achard Dec 21, 2024
766ceaa
Temp disable channel first
C-Achard Dec 21, 2024
e3286ce
Fix init of Colab worker
C-Achard Dec 21, 2024
b647d98
Move issues with transforms to colab script + disable pad/channelfirst
C-Achard Dec 21, 2024
0e69ee4
Enable ChannelFirst again
C-Achard Dec 21, 2024
788903e
Remove strict_check = False in original worker
C-Achard Dec 21, 2024
d00d1b6
Remove redundant code + Colab notebook tweaks
C-Achard Dec 21, 2024
b42df9d
Revert wandb check
C-Achard Dec 21, 2024
a5acd55
Update docs + Colab inference
C-Achard Dec 21, 2024
cd99114
Merge branch 'main' into cy/colab-fixes
MMathisLab Dec 22, 2024
3f56748
Update training_wnet.rst
MMathisLab Dec 22, 2024
c0bf701
Update Colab_WNet3D_training.ipynb
MMathisLab Dec 22, 2024
8b29774
update / WIP
MMathisLab Dec 22, 2024
63cbe95
Update Colab_inference_demo.ipynb
MMathisLab Dec 22, 2024
eae5f15
Update Colab_inference_demo.ipynb
MMathisLab Dec 22, 2024
a9e8760
Update Colab_inference_demo.ipynb
MMathisLab Dec 22, 2024
2a039e1
Update Colab_inference_demo.ipynb
MMathisLab Dec 22, 2024
9684ff6
Update Colab_inference_demo.ipynb
MMathisLab Dec 22, 2024
5eeac6f
Update Colab_inference_demo.ipynb
MMathisLab Dec 22, 2024
af44b04
Update Colab_inference_demo.ipynb
MMathisLab Dec 22, 2024
3d65785
Update Colab_inference_demo.ipynb
MMathisLab Dec 22, 2024
bfa1496
nearly final!
MMathisLab Dec 22, 2024
262a9ef
exec
MMathisLab Dec 22, 2024
d7f318d
Merge branch 'main' into cy/colab-fixes
MMathisLab Dec 22, 2024
1fd7b3b
final
MMathisLab Dec 22, 2024
21 changes: 9 additions & 12 deletions docs/source/guides/training_wnet.rst
@@ -18,24 +18,21 @@ The WNet3D **does not require a large amount of data to train**, but **choosing

 You may find below some guidelines, based on our own data and testing.

-The WNet3D is designed to segment objects based on their brightness, and is particularly well-suited for images with a clear contrast between objects and background.
-
-The WNet3D is not suitable for images with artifacts, therefore care should be taken that the images are clean and that the objects are at least somewhat distinguishable from the background.
+The WNet3D is a self-supervised learning approach for 3D cell segmentation, and relies on the assumption that structural and morphological features of cells can be inferred directly from unlabeled data. This involves leveraging inherent properties such as spatial coherence and local contrast in imaging volumes to distinguish cellular structures. This approach assumes that meaningful representations of cellular boundaries and nuclei can emerge solely from raw 3D volumes. Thus, we strongly recommend that you use WNet3D on stacks that have clear foreground/background segregation and limited noise. Even if your final samples have noise, it is best to train on data that is as clean as you can.


 .. important::
     For optimal performance, the following should be avoided for training:

-    - Images with very large, bright regions
-    - Almost-empty and empty images
-    - Images with large empty regions or "holes"
+    - Images with over-exposed pixels/artifacts you do not want to be learned!
+    - Almost-empty and/or fully empty images, especially if noise is present (it will learn to segment very small objects!).

-    However, the model may be accomodate:
+    However, the model may accommodate:

-    - Uneven brightness distribution
-    - Varied object shapes and radius
-    - Noisy images
-    - Uneven illumination across the image
+    - Uneven brightness distribution in your image!
+    - Varied object shapes and radius!
+    - Noisy images (as long as resolution is sufficient and boundaries are clear)!
+    - Uneven illumination across the image!

 For optimal results, during inference, images should be similar to those the model was trained on; however this is not a strict requirement.

@@ -88,7 +85,7 @@ Common issues troubleshooting

 If you do not find a satisfactory answer here, please do not hesitate to `open an issue`_ on GitHub.

-- **The NCuts loss "explodes" after a few epochs** : Lower the learning rate, for example start with a factor of two, then ten.
+- **The NCuts loss "explodes" upward after a few epochs** : Lower the learning rate, for example start with a factor of two, then ten.

 - **Reconstruction (decoder) performance is poor** : First, try increasing the weight of the reconstruction loss. If this is ineffective, switch to BCE loss and set the scaling factor of the reconstruction loss to 0.5, OR adjust the weight of the MSE loss.
20 changes: 14 additions & 6 deletions napari_cellseg3d/code_models/worker_training.py
@@ -1,4 +1,5 @@
 """Contains the workers used to train the models."""
+
 import platform
 import time
 from abc import abstractmethod
@@ -200,7 +201,10 @@ def get_patch_dataset(self, train_transforms):
         patch_func = Compose(
             [
                 LoadImaged(keys=["image"], image_only=True),
-                EnsureChannelFirstd(keys=["image"], channel_dim="no_channel"),
+                EnsureChannelFirstd(
+                    keys=["image"],
+                    channel_dim="no_channel",
+                ),
                 RandSpatialCropSamplesd(
                     keys=["image"],
                     roi_size=(
@@ -235,7 +239,8 @@ def get_dataset_eval(self, eval_dataset_dict):
             [
                 LoadImaged(keys=["image", "label"]),
                 EnsureChannelFirstd(
-                    keys=["image", "label"], channel_dim="no_channel"
+                    keys=["image", "label"],
+                    channel_dim="no_channel",
                 ),
                 # RandSpatialCropSamplesd(
                 #     keys=["image", "label"],
@@ -280,7 +285,10 @@ def get_dataset(self, train_transforms):
         load_single_images = Compose(
             [
                 LoadImaged(keys=["image"]),
-                EnsureChannelFirstd(keys=["image"]),
+                EnsureChannelFirstd(
+                    keys=["image"],
+                    channel_dim="no_channel",
+                ),
                 Orientationd(keys=["image"], axcodes="PLI"),
                 SpatialPadd(
                     keys=["image"],
@@ -1345,9 +1353,9 @@ def get_patch_loader_func(num_samples):
             )
             sample_loader_eval = get_patch_loader_func(num_val_samples)
         else:
-            num_train_samples = (
-                num_val_samples
-            ) = self.config.num_samples
+            num_train_samples = num_val_samples = (
+                self.config.num_samples
+            )

             sample_loader_train = get_patch_loader_func(
                 num_train_samples
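The recurring change in this file passes `channel_dim="no_channel"` to MONAI's `EnsureChannelFirstd`, which tells the transform that the loaded volume has no channel axis at all, so one should be prepended rather than inferred from the array shape. A minimal numpy sketch of the resulting shape change (numpy's `expand_dims` stands in for the MONAI transform here, as an assumption for illustration):

```python
import numpy as np

# A single-channel 3D volume as loaded from disk: (depth, height, width),
# with no explicit channel axis.
volume = np.zeros((64, 64, 64), dtype=np.float32)

# With channel_dim="no_channel", EnsureChannelFirstd prepends a channel
# axis instead of guessing which existing axis is the channel. The
# equivalent numpy operation is:
channel_first = np.expand_dims(volume, axis=0)

print(channel_first.shape)  # (1, 64, 64, 64)
```

Without an explicit `channel_dim`, the transform would have to guess the channel axis from metadata or shape, which is fragile for isotropic 3D volumes where no axis is obviously a channel.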