Releases · lmnt-com/haste
Haste 0.5.0-rc0
Bump version to 0.5.0-rc0 in preparation for PyPI release.
Haste 0.4.0
Added
- New layer normalized GRU layer (`LayerNormGRU`).
- New IndRNN layer.
- CPU support for all PyTorch layers.
- Support for building PyTorch API on Windows.
- Added `state` argument to PyTorch layers to specify initial state (illustrated in the sketch after this list).
- Added weight transforms to TensorFlow API (see docs for details).
- Added `get_weights` method to extract weights from RNN layers (TensorFlow).
- Added `to_native_weights` and `from_native_weights` to PyTorch API for `LSTM` and `GRU` layers.
- Validation tests to check for correctness.
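A minimal sketch of the new PyTorch-side features, assuming PyTorch-style `input_size`/`hidden_size` constructor arguments and a sequence-first input layout; the exact state shape and the return structure of the weight conversion methods are assumptions here and should be checked against the docs.

```python
import torch
import haste_pytorch as haste

x = torch.rand([25, 5, 128])                     # [time, batch, channels], assumed layout
layer = haste.GRU(input_size=128, hidden_size=256)

# New in 0.4.0: pass an explicit initial state instead of starting from zeros.
h0 = torch.zeros([1, 5, 256])                    # assumed shape, following torch.nn.GRU
y, h_n = layer(x, state=h0)

# New in 0.4.0: export weights in the torch.nn.GRU parameter layout.
native_weights = layer.to_native_weights()       # exact return structure is assumed here
# from_native_weights() performs the inverse conversion for LSTM and GRU layers.
```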
Changed
- Performance improvements to GRU layer.
- BREAKING CHANGE: PyTorch layers default to CPU instead of GPU (see the sketch after this list).
- BREAKING CHANGE: `h` must not be transposed before passing it to `gru::BackwardPass::Iterate`.
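Because layers now start on the CPU, opting back in to GPU execution follows the standard PyTorch pattern; a small sketch, with constructor arguments as assumed above:

```python
import torch
import haste_pytorch as haste

layer = haste.LSTM(input_size=128, hidden_size=256)   # 0.4.0+: parameters live on the CPU by default
x = torch.rand([25, 5, 128])

if torch.cuda.is_available():
    layer = layer.cuda()        # move parameters to the GPU explicitly
    x = x.cuda()

y, state = layer(x)             # runs on whichever device the layer and input share
```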
Fixed
- Multi-GPU training failures with TensorFlow caused by invalid sharing of `cublasHandle_t`.
Haste 0.3.0
Added
- PyTorch support.
- New layer normalized LSTM layer (`LayerNormLSTM`); see the usage sketch after this list.
- New fused layer normalization layer.
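A minimal usage sketch of the new PyTorch bindings with the layer normalized LSTM, loosely following the project README; the `zoneout` and `dropout` argument names and the sequence-first layout are assumptions here.

```python
import torch
import haste_pytorch as haste

x = torch.rand([25, 5, 128]).cuda()              # [time, batch, channels], assumed layout
layer = haste.LayerNormLSTM(input_size=128, hidden_size=256,
                            zoneout=0.1, dropout=0.05)
layer.cuda()                                     # 0.3.0 layers run on the GPU

y, state = layer(x)                              # output sequence and final recurrent state
```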
Fixed
- Occasional uninitialized memory use in TensorFlow LSTM implementation.
Haste 0.2.0
This release focuses on LSTM performance.
Added
- New time-fused API for LSTM (`lstm::ForwardPass::Run`, `lstm::BackwardPass::Run`); see the sketch after this list.
- Benchmarking code to evaluate the performance of an implementation.
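The point of the time-fused API is to process the whole input sequence in one call rather than invoking the per-step `Iterate` method in a loop, which lets matrix multiplies be batched across time steps. The sketch below illustrates that distinction with stock PyTorch modules only; it is not the haste C++ interface.

```python
import torch

x = torch.rand([25, 5, 128])          # [time, batch, channels]
cell = torch.nn.LSTMCell(128, 256)    # per-step processing
lstm = torch.nn.LSTM(128, 256)        # whole-sequence processing

# Iterative style, analogous to calling lstm::ForwardPass::Iterate once per time step.
h = torch.zeros([5, 256])
c = torch.zeros([5, 256])
outputs = []
for t in range(x.size(0)):
    h, c = cell(x[t], (h, c))
    outputs.append(h)
y_iterative = torch.stack(outputs)

# Fused style, analogous to lstm::ForwardPass::Run: one call for the full sequence,
# so input-to-hidden GEMMs can be batched across all time steps.
y_fused, _ = lstm(x)
```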
Changed
- Performance improvements to existing iterative LSTM API.
- BREAKING CHANGE: `h` must not be transposed before passing it to `lstm::BackwardPass::Iterate`.
- BREAKING CHANGE: `dv` does not need to be allocated and `v` must be passed instead to `lstm::BackwardPass::Iterate`.
Haste 0.1.0
Initial release.