Oncology FM Evaluation Framework by kaiko.ai
Installation • How To Use • Quick Start • Documentation • Datasets • Leaderboard • Contribute • Acknowledgements
eva is an evaluation framework for oncology foundation models (FMs) by kaiko.ai.
Check out the documentation for more information.
- Standardized benchmarking for oncology foundation models
- Supports patch-level, slide-level, and scan-level classification, 2D/3D semantic segmentation, and VQA tasks
- Offline and online evaluation modes
- Built-in support for popular medical datasets and models
- Robust evaluation with multi-run statistics
- Fully configurable via YAML or Python
Simple installation from PyPI:

```sh
# to install the core version only
pip install kaiko-eva

# to install the expanded `vision` version
pip install 'kaiko-eva[vision]'

# to install the expanded `language` version
pip install 'kaiko-eva[language]'

# to install the expanded `multimodal` version
pip install 'kaiko-eva[multimodal]'

# to install everything
pip install 'kaiko-eva[all]'
```

To install the latest version from the main branch:

```sh
pip install "kaiko-eva[all] @ git+https://github.com/kaiko-ai/eva.git"
```

You can verify that the installation was successful by executing:

```sh
eva --version
```

eva can be used directly from the terminal as a CLI tool as follows:
```sh
eva {fit,predict,predict_fit} --config url/or/path/to/the/config.yaml
```

eva uses jsonargparse to make it easily configurable by automatically generating command-line interfaces (CLIs), which allows calling any Python object from the command line. Moreover, the configuration structure is always in sync with the code. Thus, eva can be used either directly from Python or as a CLI tool (recommended).
For more information, please refer to the documentation.
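To illustrate the `class_path`/`init_args` pattern used throughout eva's configs, here is a minimal, illustrative sketch (not eva's actual implementation) of the kind of resolution a jsonargparse-style config performs — a dotted class path is imported and instantiated with its keyword arguments. The stdlib classes below are placeholders:

```python
import importlib


def instantiate(spec: dict):
    """Illustrative sketch: resolve a `class_path`/`init_args` node
    into a live Python object, as jsonargparse-style configs do."""
    module_name, _, cls_name = spec["class_path"].rpartition(".")
    cls = getattr(importlib.import_module(module_name), cls_name)
    return cls(**spec.get("init_args", {}))


# e.g. a YAML node `class_path: fractions.Fraction` with
# `init_args: {numerator: 1, denominator: 2}` resolves to:
obj = instantiate({
    "class_path": "fractions.Fraction",
    "init_args": {"numerator": 1, "denominator": 2},
})
print(obj)  # 1/2
```

Because the CLI is generated from the same class signatures, any `init_args` value can also be overridden directly on the command line.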
Learn about Configs
The following interfaces are identical:
**Python interface:**

```python
# main.py
# execute with: `python main.py`
from torch import nn

from eva import core
from eva.vision import datasets, transforms

# initialize trainer
trainer = core.Trainer(max_steps=100)

# initialize model
model = core.HeadModule(
    backbone=nn.Flatten(),
    head=nn.Linear(150528, 4),
    criterion=nn.CrossEntropyLoss(),
)

# initialize data
data = core.DataModule(
    datasets=core.DatasetsSchema(
        train=datasets.BACH(
            root="data/bach",
            split="train",
            download=True,
            transforms=transforms.ResizeAndCrop(),
        ),
    ),
    dataloaders=core.DataloadersSchema(
        train=core.DataLoader(batch_size=32),
    ),
)

# perform fit
pipeline = core.Interface()
pipeline.fit(trainer, model=model, data=data)
```

**Configuration file:**

```yaml
# main.yaml
# execute with: `eva fit --config main.yaml`
---
trainer:
  class_path: eva.Trainer
  init_args:
    max_steps: 100
model:
  class_path: eva.HeadModule
  init_args:
    backbone: torch.nn.Flatten
    head:
      class_path: torch.nn.Linear
      init_args:
        in_features: 150528
        out_features: 4
    criterion: torch.nn.CrossEntropyLoss
data:
  class_path: eva.DataModule
  init_args:
    datasets:
      train:
        class_path: eva.vision.datasets.BACH
        init_args:
          root: ./data/bach
          split: train
          download: true
          transforms: eva.vision.transforms.ResizeAndCrop
    dataloaders:
      train:
        batch_size: 32
```
The `.yaml` file defines eva's functionality: its contents are parsed and translated into Python objects directly. Natively supported configs can be found in the configs directory of the repo; they can be referenced either from local storage or from a remote URL.
Offline classification with DINO ViT-S/16 on the BACH dataset:

```sh
# set the model architecture and run the offline
# evaluation pipeline with the BACH dataset config
DOWNLOAD_DATA=true \
MODEL_NAME=universal/vit_small_patch16_224_dino \
eva predict_fit \
  --config https://raw.githubusercontent.com/kaiko-ai/eva/main/configs/vision/pathology/offline/classification/bach.yaml
```

Online segmentation with DINO ViT-S/16 on the MoNuSAC dataset, using the ConvDecoderWithImage decoder:
```sh
# define the model backbone and run online
# segmentation training on the MoNuSAC dataset
DOWNLOAD_DATA=true \
MODEL_NAME=universal/vit_small_patch16_224_dino \
eva fit \
  --config https://raw.githubusercontent.com/kaiko-ai/eva/main/configs/vision/pathology/online/segmentation/monusac.yaml
```

The results of five different runs will be saved to `./logs` by default, or to `OUTPUT_ROOT` if specified. For more examples, take a look at the configs and tutorials.
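To see what the multi-run statistics amount to, here is a minimal sketch of aggregating per-run scores into a mean and standard deviation; the numbers below are made up for illustration, not real benchmark results:

```python
import statistics

# hypothetical per-run scores from 5 evaluation runs (illustrative only)
run_scores = [0.871, 0.866, 0.874, 0.869, 0.872]

# aggregate into a mean and (sample) standard deviation
mean = statistics.mean(run_scores)
std = statistics.stdev(run_scores)
print(f"score: {mean:.3f} ± {std:.3f}")  # score: 0.870 ± 0.003
```

Reporting mean ± std over repeated runs makes small differences between foundation models easier to judge against run-to-run noise.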
The following table shows the FMs we have evaluated with eva. For more detailed information about the evaluation process, please refer to our documentation.
eva is an open-source project and welcomes contributions of all kinds. Please check out the developer and contributing guides for help on how to do so.
All contributors must follow the code of conduct.
Our codebase is built on multiple open-source contributions.
If you find this repository useful, please consider giving it a star ⭐ and adding the following citation:
```bibtex
@inproceedings{kaiko.ai2024eva,
    title={eva: Evaluation framework for pathology foundation models},
    author={kaiko.ai and Ioannis Gatopoulos and Nicolas K{\"a}nzig and Roman Moser and Sebastian Ot{\'a}lora},
    booktitle={Medical Imaging with Deep Learning},
    year={2024},
    url={https://openreview.net/forum?id=FNBQOPj18N}
}
```