Releases: Dobiasd/frugally-deep

v0.7.4-p0

05 Jan 17:41

Support for the Permute layer added.

v0.7.3-p0

02 Dec 10:40

Fix invalid shape of the bias matrix in GRU layers with reset_after=False.

v0.7.2-p0

01 Dec 09:27

Calculate a hash from the model weights and store it in the JSON file.

v0.7.1-p0

30 Nov 06:04
  • Add support for Embedding layers
  • Add support for GRU layers
  • Fix missing inline specifiers that resulted in duplicate function definitions with some compilers

v0.7.0-p0

13 Nov 13:01

Add support for the following layer types:

  • Bidirectional
  • LSTM
  • TimeDistributed

v0.6.0-p0

09 Nov 18:00

API-breaking change: Increase dimensionality of tensors from 3 to 5 (needed for future development, like LSTMs and TimeDistributed):

  • tensor3 -> tensor5
  • shape_hwc -> shape5
  • tensor3::get_yxz -> tensor5::get
  • tensor3::set_yxz -> tensor5::set
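
A minimal usage sketch with the renamed types, assuming the tensor5/shape5 constructor and the show_tensor5s helper follow the usage pattern of the project's README at this version (assumptions, not confirmed signatures):

    #include <fdeep/fdeep.hpp>
    #include <iostream>

    int main()
    {
        // Load a model previously converted from Keras (.h5 -> .json).
        const auto model = fdeep::load_model("fdeep_model.json");

        // Inputs are now 5-dimensional: fdeep::tensor5 with an fdeep::shape5.
        // The first four dimensions here are singletons; the last holds 4 values.
        const auto result = model.predict(
            {fdeep::tensor5(fdeep::shape5(1, 1, 1, 1, 4),
                            {1.0f, 2.0f, 3.0f, 4.0f})});

        std::cout << fdeep::show_tensor5s(result) << std::endl;
    }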

Feature:

  • Enable dense layers for non-flattened input tensors.

Fix:

  • Safeguard softmax implementation against NaNs.
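
For illustration only (not the library's actual implementation), a common way to guard softmax against NaNs is to subtract the maximum input before exponentiating, so exp() cannot overflow to infinity:

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Numerically stable softmax: shifting the inputs by their maximum keeps
    // exp() from overflowing, which would otherwise produce inf / inf = NaN.
    std::vector<float> stable_softmax(const std::vector<float>& xs)
    {
        const float max_x = *std::max_element(xs.begin(), xs.end());
        std::vector<float> result(xs.size());
        float sum = 0.0f;
        for (std::size_t i = 0; i < xs.size(); ++i)
        {
            result[i] = std::exp(xs[i] - max_x);
            sum += result[i];
        }
        for (auto& r : result)
        {
            r /= sum;
        }
        return result;
    }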

v0.5.4-p0

11 Oct 14:34
  • Slight performance improvements

v0.5.3-p0

07 Oct 16:50
  • Fix PReLU layer with shared axes and variable shapes

v0.5.2-p0

07 Oct 08:45
  • Minor cleanup

v0.5.1-p0

07 Oct 08:35
  • Add PReLU layer with shared_axes.
  • Add compatibility to Eigen 3.2.x.
  • Add parametrized ReLU layers, drop relu6.
  • Switch from channel_first to channel_last order. This is an API-breaking change:
    • Models converted from .h5 to .json with older versions have to be re-converted.
    • shape3 was dropped, shape_hwc was added instead.
    • tensor3_pos was dropped, tensor3_pos_yxz was added instead.
    • tensor3::get was dropped, tensor3::get_yxz was added instead.
    • tensor3::set was dropped, tensor3::set_yxz was added instead.
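
A minimal migration sketch for the channel_last API; only the type and member names come from the notes above, while the tensor3 constructor and the tensor3_pos_yxz argument order are assumptions:

    #include <fdeep/fdeep.hpp>
    #include <iostream>

    int main()
    {
        const auto model = fdeep::load_model("fdeep_model.json");

        // Input values are now given in height, width, channels (channel_last) order.
        const fdeep::tensor3 input(fdeep::shape_hwc(1, 1, 4),   // assumed constructor
                                   {1.0f, 2.0f, 3.0f, 4.0f});
        const auto result = model.predict({input});

        // Element access uses the renamed position type and accessors.
        const float first = result.front().get_yxz(fdeep::tensor3_pos_yxz(0, 0, 0));
        std::cout << first << std::endl;
    }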