A multi-task deep learning model for cancer grading in pathology images. The model jointly performs categorical classification (main task) and auxiliary ordinal classification, using an L_CEO loss for the auxiliary ordinal task.
Link to Medical Image Analysis paper.
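As a rough illustration of the joint design (a sketch, not the exact architecture from the paper), a shared feature extractor can feed two heads: one producing class logits for the categorical task and one producing a scalar grade estimate for the auxiliary ordinal task. All dimensions and weights below are arbitrary assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_head_forward(x, w_shared, w_cls, w_reg):
    """Schematic forward pass: shared features feed a classification
    head (class logits) and an ordinal/regression head (scalar grade)."""
    h = np.maximum(x @ w_shared, 0.0)   # shared features + ReLU
    logits = h @ w_cls                  # categorical head: (batch, n_classes)
    grade = (h @ w_reg).squeeze(-1)     # ordinal head: (batch,)
    return logits, grade

# Toy dimensions: 8 samples, 16 input features, 4 grade classes.
x = rng.normal(size=(8, 16))
w_shared = rng.normal(size=(16, 32))
w_cls = rng.normal(size=(32, 4))
w_reg = rng.normal(size=(32, 1))

logits, grade = two_head_forward(x, w_shared, w_cls, w_reg)
print(logits.shape, grade.shape)  # (8, 4) (8,)
```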
All the models in this project were evaluated on the following datasets:
- Colon_KBSMC (Colon TMA from Kangbuk Samsung Hospital)
- Colon_KBSMC (Colon WSI from Kangbuk Samsung Hospital)
- Prostate_UHU (Prostate TMA from University Hospital Zurich - Harvard dataverse)
- Prostate_UBC (Prostate TMA from UBC - MICCAI 2019)
```
conda env create -f environment.yml
conda activate jco_learning
pip install torch~=1.8.1+cu111
```
The commands above install PyTorch 1.8.1 with CUDA 11.1. The code also works with older PyTorch versions (PyTorch >= 1.1).
Below are the main directories in the repository:
dataloader/
: the data loader and augmentation pipeline

docs/
: figures/GIFs used in the repo

misc/
: miscellaneous utilities

model_lib/
: model definitions, along with the main run step and hyperparameter settings

script/
: defines the training loop
Below are the main executable scripts in the repository:
config.py
: configuration file

config_validator.py
: configuration file for the validation/test phase and for generating predicted maps

dataset.py
: defines the dataset classes

train_val.py
: main training script

train_val_ceo_for_cancer_only.py
: training script in which the ordinal loss is applied to cancer classes only (the benign class is excluded)

infer_produce_predict_map_wsi.py
: sliding-window inference that generates a predicted map or probability map for a WSI/core image
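The sliding-window traversal used for inference can be sketched as follows; the patch size, stride, and image dimensions are illustrative assumptions, not the script's actual defaults.

```python
def sliding_window_coords(height, width, patch, stride):
    """Top-left (y, x) coordinates of patches covering an image,
    clamped so the last patch in each axis ends at the image border."""
    ys = list(range(0, max(height - patch, 0) + 1, stride))
    xs = list(range(0, max(width - patch, 0) + 1, stride))
    if ys[-1] != height - patch:
        ys.append(height - patch)  # extra row so the bottom edge is covered
    if xs[-1] != width - patch:
        xs.append(width - patch)   # extra column so the right edge is covered
    return [(y, x) for y in ys for x in xs]

# A 100x100 image covered by 64x64 patches with stride 32.
coords = sliding_window_coords(height=100, width=100, patch=64, stride=32)
print(len(coords))  # 9
```

Each patch would then be run through the trained model, and the per-patch predictions stitched back into a full-resolution map.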
```
python train_val.py [--gpu=<id>] [--run_info=<task_name + loss function>] [--dataset=<colon/prostate>]
```
Options: **our proposed method and 9 common/state-of-the-art categorical and ordinal classification methods, including:**
METHOD | run_info | Description |
---|---|---|
C_CE | CLASS_ce | Classification: Cross-Entropy loss |
C_FOCAL | CLASS_FocalLoss | Classification: Focal loss, Focal loss for dense object detection [paper] |
R_MAE | REGRESS_mae | Regression: MAE loss |
R_MSE | REGRESS_mse | Regression: MSE loss |
R_SL | REGRESS_soft_label | Regression: Soft-Label loss, Deep learning regression for prostate cancer detection and grading in Bi-parametric MRI [paper] |
O_DORN | REGRESS_rank_dorn | Ordinal regression: Deep ordinal regression network for monocular depth estimation [paper] [code] |
O_CORAL | REGRESS_rank_coral | Ordinal regression: Rank consistent ordinal regression for neural networks with application to age estimation [paper] [code] |
O_FOCAL | REGRESS_FocalOrdinal | Ordinal regression: Joint prostate cancer detection and Gleason score prediction in mp-MRI via FocalNet [paper] |
M_MTMR | MULTI_mtmr | Multitask: Multi-task deep model with margin ranking loss for lung nodule analysis [paper] [code] |
M_MAE | MULTI_ce_mae | Multitask: Class_CE + Regression_MAE |
M_MSE | MULTI_ce_mse | Multitask: Class_CE + Regression_MSE |
M_MAE_CEO | MULTI_ce_mae_ceo | Multitask: Class_CE + Regression_MAE_CEO (Ours) |
M_MSE_CEO | MULTI_ce_mse_ceo | Multitask: Class_CE + Regression_MSE_CEO (Ours) |
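As a hedged sketch of the multi-task objective (illustrating the MULTI_ce_mae idea of combining Class_CE with Regression_MAE, not the paper's exact L_CEO formulation), the combined loss can be written as a weighted sum; the weight `lam` below is an arbitrary assumption.

```python
import numpy as np

def multitask_loss(logits, grade_pred, labels, lam=1.0):
    """Combined loss: cross-entropy on class logits plus MAE on the
    predicted scalar grade, weighted by lam (sketch of MULTI_ce_mae)."""
    # Numerically stable softmax cross-entropy.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(labels)), labels].mean()
    # Grade labels are the class indices, so they double as ordinal targets.
    mae = np.abs(grade_pred - labels).mean()
    return ce + lam * mae

logits = np.array([[4.0, 0.0, 0.0], [0.0, 5.0, 0.0]])
grade_pred = np.array([0.1, 1.2])
labels = np.array([0, 1])
print(multitask_loss(logits, grade_pred, labels))
```

Swapping the MAE term for a squared error gives the MULTI_ce_mse variant of the same idea.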
```
python infer_produce_predict_map_wsi.py [--gpu=<id>] [--run_info=<task_name + loss function>]
```
Model weights obtained from training MULTI_ce_mse_ceo are available here:
Access the full set of checkpoints here.
If any of the above checkpoints are used, please cite the corresponding paper.
- Trinh Thi Le Vuong, Kyungeun Kim, Boram Song, and Jin Tae Kwak
If any part of this code is used, please cite our paper.
BibTex entry:
```
@article{le2021joint,
  title={Joint categorical and ordinal learning for cancer grading in pathology images},
  author={Le Vuong, Trinh Thi and Kim, Kyungeun and Song, Boram and Kwak, Jin Tae},
  journal={Medical Image Analysis},
  pages={102206},
  year={2021},
  publisher={Elsevier}
}
```