
Releases: matsengrp/netam

v0.2.1

06 Dec 23:50 · b59adbd

What's Changed

Full Changelog: v0.2.0...v0.2.1

v0.2.0

13 Nov 22:21 · 2f01d77

What's Changed (Initial release)

  • Infrastructure and shmoof models by @matsen in #1
  • Hyperparam opt, more models, more flexible training by @matsen in #2
  • Fix branch length optimization using correct loss; WiggleAct by @matsen in #5
  • Adding per-base inference by @matsen in #9
  • Branch length optimization by @matsen in #11
  • Masking from child sequences; DNSM model bugfix; Yun branch lengths by @matsen in #14
  • Add CI Tests by @willdumm in #21
  • Standardization fixes; less radical LR reset by @matsen in #20
  • Delete experiment.py by @matsen in #23
  • Syncing with epam update; better TensorBoard; refactoring by @matsen in #26
  • Parallel branch length optimization by @matsen in #28
  • Ability to parallelize between GPUs by @matsen in #30
  • Simplified handling of DNSM datasets by @matsen in #32
  • Bring over epam code to avoid circular dependency by @matsen in #34
  • More flexible training; device bugfix; more hyperparams in yml; renaming to weight_decay by @matsen in #36
  • Bring tests/test_sequences.py from epam by @matsen in #38
  • Bringing over tests/test_molevol.py by @matsen in #40
  • Try "warm up" phase by @matsen in #41
  • Add ability to get attention maps by @matsen in #43
  • Further development of attention maps; no weight decay for 1D parameters by @matsen in #45
  • Minor fixes; pyproject.toml file by @matsen in #47
  • Fix issues having to do with standardization by @matsen in #49
  • Don't record loss before training, which fixes incorrect logging by @matsen in #53
  • Add codon_prob.py with a model to adjust codon probs by hit class by @willdumm in #50
  • Format Docstrings by @willdumm in #59
  • Refactor neutral_aa_mut_probs to return per-AA information by @matsen in #62
  • First approximation to a DASM: 20 output dimensions but same loss by @matsen in #64
  • Adding a per-AA loss to the DASM by @matsen in #66
  • Better DASM handling of ambiguous amino acids by @matsen in #68
  • Integrate the multihit model into the DNSM framework by @willdumm in #71
  • Docstrings; multihit device fix by @matsen in #73
  • Split forward functions in DXSM models by @willdumm in #74
  • Renaming to CSP where appropriate, and other related things by @matsen in #75
  • Replacing normalize_sub_probs with a check; fixing consistency problems by @matsen in #77
  • Add loss weights keyword argument to DASMBurrito constructor by @willdumm in #78
  • Release cleanup; weights-only crepe loading by @matsen in #80
  • Release of thrifty models; pretrained module; demo notebook by @matsen in #81
  • Moving shared code to dxsm.py; add_shm_model_outputs_to_pcp_df by @matsen in #83
  • Publish to PyPI by @willdumm in #82

New Contributors

Full Changelog: https://github.com/matsengrp/netam/commits/v0.2.0