From 779b303ae2630d82ced8844327913e68629ffeb2 Mon Sep 17 00:00:00 2001
From: Crackodu91
Date: Tue, 7 Jan 2025 17:54:10 +0100
Subject: [PATCH 1/5] Update guide-qupath-objects.md

fix typo
---
 docs/guide-qupath-objects.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/guide-qupath-objects.md b/docs/guide-qupath-objects.md
index f3e071b..1e23c75 100644
--- a/docs/guide-qupath-objects.md
+++ b/docs/guide-qupath-objects.md
@@ -108,7 +108,7 @@ You will first need to export those with the `exportPixelClassifierProbabilities
 
 Then the segmentation script can :
 
-+ find punctal objects as polygons (with a shape) or points (punctal) than can be counted.
++ find punctual objects as polygons (with a shape) or points (punctual) that can be counted.
 + trace fibers with skeletonization to create lines whose lengths can be measured.
 
 Several parameters have to be specified by the user, see the segmentation script [API reference](api-script-segment.md). This script will generate [GeoJson](tips-formats.md#json-and-geojson-files) files that can be imported back to QuPath with the `importGeojsonFiles.groovy` script.
@@ -140,4 +140,4 @@ QuPath extension : [https://github.com/ksugar/qupath-extension-sam](https://gith
 Original repositories : [samapi](https://github.com/ksugar/samapi), [SAM](https://github.com/facebookresearch/segment-anything)
 Reference papers : [doi:10.1101/2023.06.13.544786](https://doi.org/10.1101/2023.06.13.544786), [doi:10.48550/arXiv.2304.02643](https://doi.org/10.48550/arXiv.2304.02643)
 
-This is more an interactive annotation tool than a fully automatic segmentation algorithm.
\ No newline at end of file
+This is more an interactive annotation tool than a fully automatic segmentation algorithm.

From 4d0e7703a4df88d23502372303bf903b7d17ad72 Mon Sep 17 00:00:00 2001
From: Crackodu91
Date: Tue, 7 Jan 2025 17:58:22 +0100
Subject: [PATCH 2/5] Update guide-qupath-objects.md

fix typo
---
 docs/guide-qupath-objects.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/guide-qupath-objects.md b/docs/guide-qupath-objects.md
index 1e23c75..81bb864 100644
--- a/docs/guide-qupath-objects.md
+++ b/docs/guide-qupath-objects.md
@@ -75,7 +75,7 @@ First and foremost, you should use a QuPath project dedicated to the training of
 6. In `Advanced settings`, check `Reweight samples` to help make sure a classification is not over-represented.
 7. Modify the different parameters :
     + `Classifier` : typically, `RTrees` or `ANN_MLP`. This can be changed dynamically afterwards to see which works best for you.
-    + `Resolution` : this is the pixel size used. This is a trade-off between accuracy and speed. If your objects are only composed of a few pixels, you'll the full resolution, for big objects reducing the resolution will be faster.
+    + `Resolution` : this is the pixel size used. This is a trade-off between accuracy and speed. If your objects are only composed of a few pixels, you'll want the full resolution, for big objects reducing the resolution will be faster.
    + `Features` : this is the core of the process -- where you choose the filters. In `Edit`, you'll need to choose :
        - The fluorescence channels
        - The scales, eg. the size of the filters applied to the image. The bigger, the coarser the filter is. Again, this will depend on the size of the objects you want to segment.
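The hunk above touches the part of the guide about importing the generated GeoJSON files back into QuPath with `importGeojsonFiles.groovy`. For illustration only, a rough sketch of that import step is given below; it assumes QuPath >= 0.4 scripting functions and a hypothetical file path, and is not the project's actual script.

```groovy
// Rough sketch (not importGeojsonFiles.groovy): read objects from a GeoJSON
// file produced by the segmentation script and add them to the current image.
// The file path below is a placeholder.
import qupath.lib.io.PathIO

def geojsonPath = new File('/path/to/exports/segmentation_results.geojson').toPath()

// Parse the GeoJSON features into QuPath PathObjects (assumes QuPath >= 0.4)
def importedObjects = PathIO.readObjects(geojsonPath)

// Add them to the current hierarchy and refresh the viewer
addObjects(importedObjects)
fireHierarchyUpdate()
println "Imported ${importedObjects.size()} objects"
```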
From cfb57eb9f234625d5feba6b24009f5fdeb380406 Mon Sep 17 00:00:00 2001
From: Crackodu91
Date: Tue, 7 Jan 2025 18:05:04 +0100
Subject: [PATCH 3/5] Update guide-qupath-objects.md

fix huge mistake
---
 docs/guide-qupath-objects.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/guide-qupath-objects.md b/docs/guide-qupath-objects.md
index 81bb864..75c9cfa 100644
--- a/docs/guide-qupath-objects.md
+++ b/docs/guide-qupath-objects.md
@@ -48,7 +48,7 @@ Then, choose the following options :
 ## Detect objects
 
 ### Built-in cell detection
-QuPath has a built-in cell detection feature, available in `Analyze > Cell detection`. You hava a full tutorial in the [official documentation](https://qupath.readthedocs.io/en/stable/docs/tutorials/cell_detection.html).
+QuPath has a built-in cell detection feature, available in `Analyze > Cell detection`. You have a full tutorial in the [official documentation](https://qupath.readthedocs.io/en/stable/docs/tutorials/cell_detection.html).
 
 Briefly, this uses a watershed algorithm to find bright spots and can perform a cell expansion to estimate the full cell shape based on the detected nuclei. Therefore, this works best to segment nuclei but one can expect good performance for cells as well, depending on the imaging and staining conditions.
 

From 047b9f60e569a815e00ffc27d5718a2b06b07523 Mon Sep 17 00:00:00 2001
From: Crackodu91
Date: Tue, 7 Jan 2025 18:06:31 +0100
Subject: [PATCH 4/5] Update guide-qupath-objects.md

---
 docs/guide-qupath-objects.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/guide-qupath-objects.md b/docs/guide-qupath-objects.md
index 75c9cfa..30f0c9a 100644
--- a/docs/guide-qupath-objects.md
+++ b/docs/guide-qupath-objects.md
@@ -56,7 +56,7 @@ Briefly, this uses a watershed algorithm to find bright spots and can perform a
 In `scripts/qupath-utils/segmentation`, there is `watershedDetectionFilters.groovy` which uses this feature from a script. It further allows you to filter out detected cells based on shape measurements as well as fluorescence itensity in several channels and cell compartments.
 
 ### Pixel classifier
-Another very powerful and versatile way to segment cells if through machine learning. Note the term "machine" and not "deep" as it relies on statistics theory from the 1980s. QuPath provides an user-friendly interface to that, similar to what [ilastik](https://www.ilastik.org/) provides.
+Another very powerful and versatile way to segment cells is through machine learning. Note the term "machine" and not "deep" as it relies on statistics theory from the 1980s. QuPath provides an user-friendly interface to that, similar to what [ilastik](https://www.ilastik.org/) provides.
 
 The general idea is to train a model to classify every pixel as a signal or as background. You can find good resources on how to procede in the [official documentation](https://qupath.readthedocs.io/en/stable/docs/tutorials/pixel_classification.html) and some additionnal tips and tutorials on Michael Neslon's blog ([here](https://www.imagescientist.com/mpx-pixelclassifier) and [here](https://www.imagescientist.com/brightfield-4-pixel-classifier)).
 
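The section patched above mentions running the built-in watershed cell detection from a script via `watershedDetectionFilters.groovy`. As a rough illustration only, a stripped-down version of that idea is sketched below; the channel name, thresholds and measurement names are placeholders, not the actual parameters of the project's script.

```groovy
// Stripped-down sketch: scripted watershed cell detection followed by a filter
// on shape and intensity. All numeric values and measurement names are
// placeholders -- adapt them to your channels and staining.
selectAnnotations()  // run the detection inside the selected annotation(s)

runPlugin('qupath.imagej.detect.cells.WatershedCellDetection',
    '{"detectionImage": "DAPI", "requestedPixelSizeMicrons": 0.5, ' +
    '"threshold": 100.0, "watershedPostProcess": true, "cellExpansionMicrons": 3.0}')

// Drop detections that are not round enough or not bright enough
def toRemove = getDetectionObjects().findAll { d ->
    measurement(d, 'Nucleus: Circularity') < 0.6 ||
    measurement(d, 'Cell: Channel 2 mean') < 50
}
removeObjects(toRemove, true)
println "Removed ${toRemove.size()} detections"
```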
From 1cf5cfb2f0cd85503434bfb63c95483ca7175d1d Mon Sep 17 00:00:00 2001
From: Crackodu91
Date: Tue, 7 Jan 2025 18:07:51 +0100
Subject: [PATCH 5/5] Update guide-qupath-objects.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Maybe you could add a small section about the Classify step of the pixel classifier and the possibility of changing Ignore to a negative class?
---
 docs/guide-qupath-objects.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/guide-qupath-objects.md b/docs/guide-qupath-objects.md
index 30f0c9a..77796ab 100644
--- a/docs/guide-qupath-objects.md
+++ b/docs/guide-qupath-objects.md
@@ -69,7 +69,7 @@ First and foremost, you should use a QuPath project dedicated to the training of
 
 1. You should choose some images from different animals, with different imaging conditions (staining efficiency and LED intensity) in different regions (eg. with different objects' shape, size, sparsity...). The goal is to get the most diversity of objects you could encounter in your experiments. 10 images is more than enough !
 2. Import those images to the new, dedicated QuPath project.
-3. Create the classifications you'll need, "Cells: marker+" for example. The "Ignore*" classification is used for the background. 
+3. Create the classifications you'll need, "Cells: marker+" for example. The "Ignore*" classification is used for the background.
 4. Head to `Classify > Pixel classification > Train pixel classifier`, and turn on `Live prediction`.
 5. Load all your images in `Load training`.
 6. In `Advanced settings`, check `Reweight samples` to help make sure a classification is not over-represented.
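In the same spirit as the suggestion in the commit message above, it may be worth noting that once a pixel classifier has been trained and saved in the project, it can also be applied from a script to turn classified regions into objects. A minimal sketch follows; the classifier name and the minimum area / hole size are assumed placeholders.

```groovy
// Minimal sketch: apply a pixel classifier saved in the current project to the
// selected annotation(s) and create objects from the classified regions.
// The classifier name and both size thresholds below are placeholders.
selectAnnotations()

// Create annotation objects from regions assigned to a signal class
createAnnotationsFromPixelClassifier('my_pixel_classifier', 10.0, 5.0)

// Or create detection objects instead:
// createDetectionsFromPixelClassifier('my_pixel_classifier', 10.0, 5.0)
```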