
Model Evaluation

Aayush Grover edited this page May 13, 2025 · 3 revisions

An example script for evaluating a model with asap is provided in `tutorials/eval.py`.

Create a peak or whole-genome dataset to evaluate generalizability

`asap.peak_dataset(signal_file, peak_file, genome, chroms, generated, blacklist_file=None, unmap_file=None)`

or

`asap.wg_dataset(signal_file, genome, chroms, generated, blacklist_file=None, unmap_file=None)`

Creates a peak or whole-genome dataset for evaluation.

Args:

  • signal_file (str): Path to the signal file.
  • peak_file (str): Path to the peak file (peak_dataset only).
  • genome (str): Path to the genome file.
  • chroms (List[int]): List of chromosomes held out for evaluation.
  • generated (str): Path to the generated data.
  • blacklist_file (List[str], optional): List of paths to blacklist files (including SNV VCFs).
  • unmap_file (str, optional): Path to the file of unmappable regions.

Returns:

  • test_dataset (asap.dataloader.BaseDataset): Test dataset (either peak or whole-genome)
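As a minimal sketch of assembling these arguments, the snippet below picks a held-out chromosome split for a generalizability evaluation. The file paths are placeholders and the train/test split is a common convention, not an asap requirement; the actual call is shown commented out:

```python
# Hypothetical sketch: choose held-out chromosomes for evaluation.
# All file paths below are placeholders, not real asap inputs.
test_chroms = [8, 9]  # chromosomes reserved for evaluation
train_chroms = [c for c in range(1, 23) if c not in test_chroms]

# test_dataset = asap.peak_dataset(
#     "signal.bigWig", "peaks.bed", "hg38.fa", test_chroms, "generated/",
#     blacklist_file=["blacklist.bed", "snvs.vcf"], unmap_file="unmap.bed",
# )
```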

Evaluate a pre-trained model's generalizability

`asap.eval_model(experiment_name, model, eval_dataset, logs_dir, batch_size=64, use_map=False)`

Evaluates the pre-trained model on the peak or whole-genome dataset.

Args:

  • experiment_name (str): The name of the experiment. This will be used to load model checkpoints.
  • model (str): The model name to evaluate. Choose from [cnn, lstm, dcnn, convnext_cnn, convnext_lstm, convnext_dcnn, convnext_transformer].
  • eval_dataset (asap.dataloader.BaseDataset): The test dataset used for model evaluation.
  • logs_dir (str): The directory to load model checkpoints from.
  • batch_size (int): The batch size for evaluation.
  • use_map (bool): Whether mappability information was used during training.

Returns:

  • scores (Dict[Dict]): A dictionary keyed by test chromosome; each entry is a dictionary of metrics: Pearson's correlation (pearson_r), mean squared error (mse), Poisson negative log-likelihood (poisson_nll), Spearman's correlation (spearman_r), and Kendall's tau (kendall_tau).
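To make the returned metrics concrete, here is a hedged sketch of the kind of per-chromosome dictionary these names describe, computed on toy predictions with numpy and scipy. This is not asap's actual implementation; in particular, the Poisson NLL below drops the constant log-factorial term:

```python
# Illustrative only: compute the five metric names reported per chromosome,
# using toy observed/predicted signal values.
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

true = np.array([2.0, 5.0, 1.0, 7.0, 3.0])  # observed signal (toy)
pred = np.array([2.5, 4.0, 1.5, 6.0, 3.5])  # model predictions (toy)

scores = {
    "pearson_r": pearsonr(true, pred)[0],
    "mse": float(np.mean((true - pred) ** 2)),
    # Poisson NLL up to a constant: mean(pred - true * log(pred))
    "poisson_nll": float(np.mean(pred - true * np.log(pred))),
    "spearman_r": spearmanr(true, pred)[0],
    "kendall_tau": kendalltau(true, pred)[0],
}
```

Lower is better for mse and poisson_nll; the three correlation metrics lie in [-1, 1], with higher values indicating better rank or linear agreement.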
