WIP: Add ZarrTIFFWSIReader class. #897

Draft: wants to merge 2 commits into base branch `develop`.

Changes from all commits:
2 changes: 1 addition & 1 deletion benchmarks/annotation_store.ipynb
@@ -502,7 +502,7 @@
"# Part 1: Small Scale Benchmarking of Annotation Storage\n",
"\n",
"Using the already defined data generation functions (`cell_polygon` and\n",
"`cell_grid`), we create some simple artificial cell boundaries by\n",
"`cell_grid`), we create some simple artificial cell boundaries by\n",
"creating a circle of points, adding some noise, scaling to introduce\n",
"eccentricity, and then rotating. We use 20 points per cell, which is a\n",
"reasonably high value for cell annotation. However, this can be\n",
2 changes: 1 addition & 1 deletion examples/02-stain-normalization.ipynb
@@ -46,7 +46,7 @@
"\n",
"1. Load a sample WSI.\n",
"1. Extract a square patch.\n",
"1. Stain-normalize the tile using various built-in methods.\n",
"1. Stain-normalize the tile using various built-in methods.\n",
"1. Stain-normalize with a user-defined stain matrix.<br/>\n",
"\n",
"During the above steps, we will be using functions from our `stainnorm` module [here](https://github.com/TissueImageAnalytics/tiatoolbox/blob/develop/tiatoolbox/tools/stainnorm.py). This demo assumes some understanding of the functions in the `wsireader` module (for example by going through the demo [here](https://github.com/TissueImageAnalytics/tiatoolbox/blob/develop/examples/01-wsi-reading.ipynb)).\n",
1,968 changes: 984 additions & 984 deletions examples/04-patch-extraction.ipynb

Large diffs are not rendered by default.

32 changes: 16 additions & 16 deletions examples/05-patch-prediction.ipynb
@@ -118,7 +118,7 @@
]
},
"source": [
"**\\[essential\\]** Please install the following package, which is required in this notebook.\n",
"**[essential]** Please install the following package, which is required in this notebook.\n",
"\n"
]
},
@@ -470,14 +470,14 @@
"As you can see for this patch dataset, we have 9 classes/labels with IDs 0-8 and associated class names. describing the dominant tissue type in the patch:\n",
"\n",
"- BACK ⟶ Background (empty glass region)\n",
"- LYM ⟶ Lymphocytes\n",
"- LYM ⟶ Lymphocytes\n",
"- NORM ⟶ Normal colon mucosa\n",
"- DEB ⟶ Debris\n",
"- MUS ⟶ Smooth muscle\n",
"- STR ⟶ Cancer-associated stroma\n",
"- ADI ⟶ Adipose\n",
"- MUC ⟶ Mucus\n",
"- TUM ⟶ Colorectal adenocarcinoma epithelium\n",
"- DEB ⟶ Debris\n",
"- MUS ⟶ Smooth muscle\n",
"- STR ⟶ Cancer-associated stroma\n",
"- ADI ⟶ Adipose\n",
"- MUC ⟶ Mucus\n",
"- TUM ⟶ Colorectal adenocarcinoma epithelium\n",
"\n",
"It is easy to use this code for your dataset - just ensure that your dataset is arranged like this example (images of different classes are placed into different subfolders), and set the right image extension in the `image_ext` variable.\n",
"\n"
@@ -532,7 +532,7 @@
"\n",
"- `model`: Use an externally defined PyTorch model for prediction, with weights already loaded. This is useful when you want to use your own pretrained model on your own data. The only constraint is that the input model should follow `tiatoolbox.models.abc.ModelABC` class structure. For more information on this matter, please refer to our [example notebook on advanced model techniques](https://github.com/TissueImageAnalytics/tiatoolbox/blob/develop/examples/07-advanced-modeling.ipynb).\n",
"- `pretrained_model `: This argument has already been discussed above. With it, you can tell tiatoolbox to use one of its pretrained models for the prediction task. A complete list of pretrained models can be found [here](https://tia-toolbox.readthedocs.io/en/latest/usage.html?highlight=pretrained%20models#tiatoolbox.models.architecture.get_pretrained_model). If both `model` and `pretrained_model` arguments are used, then `pretrained_model` is ignored. In this example, we used `resnet18-kather100K,` which means that the model architecture is an 18 layer ResNet, trained on the Kather100k dataset.\n",
"- `pretrained_weight`: When using a `pretrained_model`, the corresponding pretrained weights will also be downloaded by default. You can override the default with your own set of weights via the `pretrained_weight` argument.\n",
"- `pretrained_weight`: When using a `pretrained_model`, the corresponding pretrained weights will also be downloaded by default. You can override the default with your own set of weights via the `pretrained_weight` argument.\n",
"- `batch_size`: Number of images fed into the model each time. Higher values for this parameter require a larger (GPU) memory capacity.\n",
"\n",
"The second line in the snippet above calls the `predict` method to apply the CNN on the input patches and get the results. Here are some important `predict` input arguments and their descriptions:\n",
@@ -541,7 +541,7 @@
"- `imgs`: List of inputs. When using `patch` mode, the input must be a list of images OR a list of image file paths, OR a Numpy array corresponding to an image list. However, for the `tile` and `wsi` modes, the `imgs` argument should be a list of paths to the input tiles or WSIs.\n",
"- `return_probabilities`: set to *__True__* to get per class probabilities alongside predicted labels of input patches. If you wish to merge the predictions to generate prediction maps for `tile` or `wsi` modes, you can set `return_probabilities=True`.\n",
"\n",
"In the `patch` prediction mode, the `predict` method returns an output dictionary that contains the `predictions` (predicted labels) and `probabilities` (probability that a certain patch belongs to a certain class).\n",
"In the `patch` prediction mode, the `predict` method returns an output dictionary that contains the `predictions` (predicted labels) and `probabilities` (probability that a certain patch belongs to a certain class).\n",
"\n",
"The cell below uses common python tools to visualize the patch classification results in terms of classification accuracy and confusion matrix.\n",
"\n"
@@ -732,9 +732,9 @@
"\n",
"- `mode='tile'`: the type of image input. We use `tile` since our input is a large image tile.\n",
"- `imgs`: in tile mode, the input is *required* to be a list of file paths.\n",
"- `save_dir`: Output directory when processing multiple tiles. We explained before why this is necessary when we are working with multiple big tiles.\n",
"- `patch_size`: This parameter sets the size of patches (in \\[W, H\\] format) to be extracted from the input files, and for which labels will be predicted.\n",
"- `stride_size`: The stride (in \\[W, H\\] format) to consider when extracting patches from the tile. Using a stride smaller than the patch size results in overlapping between consecutive patches.\n",
"- `save_dir`: Output directory when processing multiple tiles. We explained before why this is necessary when we are working with multiple big tiles.\n",
"- `patch_size`: This parameter sets the size of patches (in [W, H] format) to be extracted from the input files, and for which labels will be predicted.\n",
"- `stride_size`: The stride (in [W, H] format) to consider when extracting patches from the tile. Using a stride smaller than the patch size results in overlapping between consecutive patches.\n",
"- `labels` (optional): List of labels with the same size as `imgs` that refers to the label of each input tile (not to be confused with the prediction of each patch).\n",
"\n",
"In this example, we input only one tile. Therefore the toolbox does not save the output as files and instead returns a list that contains an output dictionary with the following keys:\n",
@@ -815,7 +815,7 @@
"id": "TocLP9Bcr4A4"
},
"source": [
"Here, we show a prediction map where each colour denotes a different predicted category. We overlay the prediction map on the original image. To generate this prediction map, we utilize the `merge_predictions` method from the `PatchPredictor` class which accepts as arguments the path of the original image, `predictor` outputs, `mode` (set to `tile` or `wsi`), `tile_resolution` (at which tiles were originally extracted) and `resolution` (at which the prediction map is generated), and outputs the \"Prediction map\", in which regions have indexed values based on their classes.\n",
"Here, we show a prediction map where each colour denotes a different predicted category. We overlay the prediction map on the original image. To generate this prediction map, we utilize the `merge_predictions` method from the `PatchPredictor` class which accepts as arguments the path of the original image, `predictor` outputs, `mode` (set to `tile` or `wsi`), `tile_resolution` (at which tiles were originally extracted) and `resolution` (at which the prediction map is generated), and outputs the \"Prediction map\", in which regions have indexed values based on their classes.\n",
"\n",
"To visualize the prediction map as an overlay on the input image, we use the `overlay_prediction_mask` function from the `tiatoolbox.utils.visualization` module. It accepts as arguments the original image, the prediction map, the `alpha` parameter which specifies the blending ratio of overlay and original image, and the `label_info` dictionary which contains names and desired colours for different classes. Below we generate an example of an acceptable `label_info` dictionary and show how it can be used with `overlay_patch_prediction`.\n",
"\n"
@@ -898,7 +898,7 @@
"source": [
"## Get predictions for patches within a WSI\n",
"\n",
"We demonstrate how to obtain predictions for all patches within a whole-slide image. As in previous sections, we will use `PatchPredictor` and its `predict` method, but this time we set the `mode` to `'wsi'`. We also introduce `IOPatchPredictorConfig`, a class that specifies the configuration of image reading and prediction writing for the model prediction engine.\n",
"We demonstrate how to obtain predictions for all patches within a whole-slide image. As in previous sections, we will use `PatchPredictor` and its `predict` method, but this time we set the `mode` to `'wsi'`. We also introduce `IOPatchPredictorConfig`, a class that specifies the configuration of image reading and prediction writing for the model prediction engine.\n",
"\n"
]
},
@@ -981,7 +981,7 @@
"- `mode`: set to 'wsi' when analysing whole slide images.\n",
"- `ioconfig`: set the IO configuration information using the `IOPatchPredictorConfig` class.\n",
"- `resolution` and `unit` (not shown above): These arguments specify the level or micron-per-pixel resolution of the WSI levels from which we plan to extract patches and can be used instead of `ioconfig`. Here we specify the WSI's level as `'baseline'`, which is equivalent to level 0. In general, this is the level of greatest resolution. In this particular case, the image has only one level. More information can be found in the [documentation](https://tia-toolbox.readthedocs.io/en/latest/usage.html?highlight=WSIReader.read_rect#tiatoolbox.wsicore.wsireader.WSIReader.read_rect).\n",
"- `masks`: A list of paths corresponding to the masks of WSIs in the `imgs` list. These masks specify the regions in the original WSIs from which we want to extract patches. If the mask of a particular WSI is specified as `None`, then the labels for all patches of that WSI (even background regions) would be predicted. This could cause unnecessary computation.\n",
"- `masks`: A list of paths corresponding to the masks of WSIs in the `imgs` list. These masks specify the regions in the original WSIs from which we want to extract patches. If the mask of a particular WSI is specified as `None`, then the labels for all patches of that WSI (even background regions) would be predicted. This could cause unnecessary computation.\n",
"- `merge_predictions`: You can set this parameter to `True` if you wish to generate a 2D map of patch classification results. However, for big WSIs you might need a large amount of memory available to do this on the file. An alternative (default) solution is to set `merge_predictions=False`, and then generate the 2D prediction maps using `merge_predictions` function as you will see later on.\n",
"\n",
"We see how the prediction model works on our whole-slide images by visualizing the `wsi_output`. We first need to merge patch prediction outputs and then visualize them as an overlay on the original image. As before, the `merge_predictions` method is used to merge the patch predictions. Here we set the parameters `resolution=1.25, units='power'` to generate the prediction map at 1.25x magnification. If you would like to have higher/lower resolution (bigger/smaller) prediction maps, you need to change these parameters accordingly. When the predictions are merged, use the `overlay_patch_prediction` function to overlay the prediction map on the WSI thumbnail, which should be extracted at the same resolution used for prediction merging. Below you can see the result:\n",
2 changes: 1 addition & 1 deletion examples/06-semantic-segmentation.ipynb
@@ -310,7 +310,7 @@
"\n",
"### Inference on tiles\n",
"\n",
"Much similar to the patch classifier functionality of the tiatoolbox, the semantic segmentation module works both on image tiles and structured WSIs. First, we need to create an instance of the `SemanticSegmentor` class which controls the whole process of semantic segmentation task and then use it to do prediction on the input image(s):\n",
"Much similar to the patch classifier functionality of the tiatoolbox, the semantic segmentation module works both on image tiles and structured WSIs. First, we need to create an instance of the `SemanticSegmentor` class which controls the whole process of semantic segmentation task and then use it to do prediction on the input image(s):\n",
"\n"
]
},
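As a minimal sketch of that two-step pattern (the pretrained model name `fcn-tissue_mask` and the paths are placeholders drawn from the toolbox's published examples):

```python
from tiatoolbox.models import SemanticSegmentor

# Create the segmentor, then run prediction on one or more input tiles.
segmentor = SemanticSegmentor(pretrained_model="fcn-tissue_mask", batch_size=4)
seg_output = segmentor.predict(
    imgs=["sample_tile.png"],
    mode="tile",
    save_dir="semantic_results/",  # per-image results are written here
)
```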