make segmentation available on CPU
pauldoucet committed Jun 12, 2024
1 parent b963e99 commit dbc5a45
Showing 2 changed files with 15 additions and 5 deletions.
14 changes: 10 additions & 4 deletions README.md
@@ -1,7 +1,13 @@
# HEST

#### What does the hest library provide?
- <b>Missing file</b> imputation and automatic alignment for Visium
- <b>Fast</b> functions for pooling transcripts and tessellating ST/H&E pairs into patches (these functions are GPU-optimized with CUCIM if CUDA is available).
- Functions for interacting with the HEST-1K dataset
- Deep learning-based or Otsu-based <b>tissue segmentation</b> for both H&E and IHC stains
- Compatibility with <b>Scanpy</b> and <b>SpatialData</b>
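The Otsu-based segmentation option above picks the grayscale threshold that maximizes between-class variance. A minimal pure-NumPy sketch of the classic algorithm (illustrative only, not hest's actual implementation; `otsu_threshold` is a hypothetical name):

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Return the Otsu threshold for a uint8 grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum_count = np.cumsum(hist)                       # pixels <= t
    cum_mean = np.cumsum(hist * np.arange(256))       # intensity mass <= t
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = cum_count[t - 1]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue  # one class is empty, skip
        mu0 = cum_mean[t - 1] / w0
        mu1 = (cum_mean[-1] - cum_mean[t - 1]) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Pixels below the returned threshold would then be treated as background (or foreground, depending on stain polarity).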

Hest provides legacy readers for the different Spatial Transcriptomics data formats supporting H&E (Visium/Visium-HD, Xenium, and ST) and for automatically aligning them with their associated histology image. Hest was used to assemble the HEST-1k dataset, processing challenging ST datasets from a wide variety of sources and converting them to formats commonly used in pathology (.tif, Scanpy AnnData). The framework also provides helper functions for pooling transcripts and tessellating slides into patches centered on ST spots.
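Conceptually, tessellating a slide into spot-centered patches amounts to slicing fixed-size windows around each spot's pixel coordinates. A minimal NumPy sketch under that assumption (`extract_patches` is illustrative, not the hest API):

```python
import numpy as np

def extract_patches(img: np.ndarray, centers: np.ndarray, patch_size: int) -> np.ndarray:
    """Cut (patch_size x patch_size) patches centered on each (x, y) spot."""
    half = patch_size // 2
    patches = []
    for x, y in centers:
        patch = img[y - half:y + half, x - half:x + half]
        if patch.shape[:2] == (patch_size, patch_size):  # skip spots too close to the border
            patches.append(patch)
    return np.stack(patches)
```

A real implementation would additionally handle pyramidal WSI formats and GPU-backed arrays, but the core indexing is the same.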

<p align="center">
<img src="figures/fig1.png" alt="Description" style="width: 70%;"/>
@@ -46,11 +52,11 @@
apt install libvips libvips-dev openslide-tools

NOTE: hest has only been tested on Linux/macOS machines; please report any bugs in the GitHub issues.

### CONCH/UNI installation (Optional, for HEST-bench only)

If you want to benchmark CONCH/UNI, additional steps are necessary:

#### CONCH installation (model + weights)

1. Request access to the model weights from the Huggingface model page [here](https://huggingface.co/MahmoodLab/CONCH).

@@ -64,7 +70,7 @@
cd CONCH
pip install -e .
```

#### UNI installation (weights only)

1. Request access to the model weights from the Huggingface model page [here](https://huggingface.co/MahmoodLab/UNI).

6 changes: 5 additions & 1 deletion src/hest/segmentation/segmentation.py
@@ -83,7 +83,11 @@
def segment_tissue_deep(img: Union[np.ndarray, openslide.OpenSlide, 'CuImage', W
stride=1
)

if torch.cuda.is_available():
    checkpoint = torch.load(weights_path)
else:
    checkpoint = torch.load(weights_path, map_location=torch.device('cpu'))

new_state_dict = {}
for key in checkpoint['state_dict']:
    if 'aux' in key:
