
Releases: tensorflow/gnn

v1.0.3

13 May 07:18
99d810e

Release 1.0 is the first with a stable public API.

What's Changed in r1.0

  • Overall
    • Use with incompatible Keras v3 raises a clear error.
      • As of release 1.0.3, the error refers to the new Keras version guide and explains how to get Keras v2 with TF 2.16+ via TF_USE_LEGACY_KERAS=1.
      • Releases 1.0.0 to 1.0.2 had a pip package requirement for TF <2.16 but could be made to work the same way.
    • Minimum supported TF/Keras version moved to >=2.12.
    • Importing the library no longer leaks private module names.
    • All parts of the GraphSchema protobuf are now exposed under tfgnn.proto.*.
    • Model saving now clearly distinguishes export for inference (pure TF, fully supported) from miscellaneous ways of saving for model reuse.
    • Numerous small bug fixes.
  • Subgraph sampling: major upgrade
    • New, unified sampler for in-memory and Beam-based subgraph sampling.
    • Module tfgnn.experimental.in_memory is removed in favor of the new sampler.
    • New console script tfgnn_sampler replaces the old tfgnn_graph_sampler.
  • GraphTensor
    • Most tfgnn.* functions on GraphTensor now work in Keras' Functional API, including the factory methods GraphTensor.from_pieces(...) etc.
    • New static checks for GraphTensor field shapes; opt out with tfgnn.disable_graph_tensor_validation().
    • New runtime checks for GraphTensor field shapes, sizes and index ranges; opt in with tfgnn.enable_graph_tensor_validation_at_runtime() (see the sketch after this list).
    • GraphTensor maintains .row_splits_dtype separately from .indices_dtype.
    • The GraphSchema and the I/O functions for tf.Example now support all non-quantized, non-complex floating-point and integer types as well as bool and string.
    • Added convenience wrapper tfgnn.pool_neighbors_to_node().
    • Misc fixes to tfgnn.random_graph_tensor(); it now respects component boundaries.
  • Runner
    • New tasks for link prediction and node classification/regression based on structured readout.
    • Now comes with API docs.
  • Models collection
    • models/contrastive_losses gets multiple extensions, including a triplet loss and API docs.
    • models/multi_head_attention replaces sigmoid with elu+1 in trained scaling.
    • Bug fixes for mixed precision.
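
Below is a minimal, illustrative sketch (not taken from the release notes) of the GraphTensor pieces API together with the new opt-in runtime validation; the "paper"/"cites" set names and the random feature values are made up for this example.

    import tensorflow as tf
    import tensorflow_gnn as tfgnn

    # Opt in to the new runtime checks of field shapes, sizes and index ranges.
    tfgnn.enable_graph_tensor_validation_at_runtime()

    # A tiny homogeneous graph: 3 "paper" nodes and 2 "cites" edges in one component.
    graph = tfgnn.GraphTensor.from_pieces(
        node_sets={
            "paper": tfgnn.NodeSet.from_fields(
                sizes=tf.constant([3]),
                features={"hidden_state": tf.random.normal([3, 16])}),
        },
        edge_sets={
            "cites": tfgnn.EdgeSet.from_fields(
                sizes=tf.constant([2]),
                adjacency=tfgnn.Adjacency.from_indices(
                    source=("paper", tf.constant([0, 2])),
                    target=("paper", tf.constant([1, 1])))),
        })
    print(graph.node_sets["paper"].sizes)  # tf.Tensor([3], ...)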

Full Changelog: v0.6.1...v1.0.0

What's Changed in v1.0.3 over v1.0.2

  • Support for TF 2.16+ via TF_USE_LEGACY_KERAS=1: updated setup.py, docs, and error messages (605b552)
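
As an illustration of that workaround (assuming TF 2.16+ with the separate tf_keras pip package installed alongside tensorflow_gnn), the environment variable must be set before TensorFlow is imported:

    import os
    os.environ["TF_USE_LEGACY_KERAS"] = "1"  # must happen before importing TensorFlow

    import tensorflow as tf
    import tensorflow_gnn as tfgnn  # imports without the Keras v3 error

    print(tf.keras.__version__)  # expected to report a Keras 2.x version

Setting TF_USE_LEGACY_KERAS=1 in the shell environment before launching Python works the same way.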

Full Changelog: v1.0.2...v1.0.3

What's Changed in v1.0.2 over v1.0.1

  • Bugfixes for use with TF 2.14 and 2.15 when tf_keras is installed but not used as tf.keras (ffa453f) (e1d9210).

Full Changelog: v1.0.1...v1.0.2

What's Changed in v1.0.1 over v1.0.0

  • Bugfix for regression tasks runner.GraphMean*Error: the reduce_type is again passed through correctly (19c10f2).

Full Changelog: v1.0.0...v1.0.1

v1.0.3rc0

24 Apr 06:59
Pre-release

Release 1.0 is the first with a stable public API.

What's Changed in r1.0

  • Overall
    • Use with incompatible Keras v3 raises a clear error.
      • As of release 1.0.3, the error refers to the new Keras version guide and explains how to get Keras v2 with TF 2.16+ via TF_USE_LEGACY_KERAS=1.
      • Releases 1.0.0 to 1.0.2 had a pip package requirement for TF <2.16 but could be made to work the same way.
    • Minimum supported TF/Keras version moved to >=2.12.
    • Importing the library no longer leaks private module names.
    • All parts of the GraphSchema protobuf are now exposed under tfgnn.proto.*.
    • Model saving now clearly distinguishes export for inference (pure TF, fully supported) from miscellaneous ways of saving for model reuse.
    • Numerous small bug fixes.
  • Subgraph sampling: major upgrade
    • New, unified sampler for in-memory and Beam-based subgraph sampling.
    • Module tfgnn.experimental.in_memory is removed in favor of the new sampler.
    • New console script tfgnn_sampler replaces the old tfgnn_graph_sampler.
  • GraphTensor
    • Most tfgnn.* functions on GraphTensor now work in Keras' Functional API, including the factory methods GraphTensor.from_pieces(...) etc.
    • New static checks for GraphTensor field shapes; opt out with tfgnn.disable_graph_tensor_validation().
    • New runtime checks for GraphTensor field shapes, sizes and index ranges; opt in with tfgnn.enable_graph_tensor_validation_at_runtime().
    • GraphTensor maintains .row_splits_dtype separately from .indices_dtype.
    • The GraphSchema and the I/O functions for tf.Example now support all non-quantized, non-complex floating-point and integer types as well as bool and string.
    • Added convenience wrapper tfgnn.pool_neighbors_to_node().
    • Misc fixes to tfgnn.random_graph_tensor(); it now respects component boundaries.
  • Runner
    • New tasks for link prediction and node classification/regression based on structured readout.
    • Now comes with API docs.
  • Models collection
    • models/contrastive_losses gets multiple extensions, including a triplet loss and API docs.
    • models/multi_head_attention replaces sigmoid with elu+1 in trained scaling.
    • Bug fixes for mixed precision.

Full Changelog: v0.6.1...v1.0.0

What's Changed in v1.0.3 over v1.0.2

  • Support for TF 2.16+ via TF_USE_LEGACY_KERAS=1: updated setup.py, docs, and error messages (605b552)

Full Changelog: v1.0.2...v1.0.3rc0

What's Changed in v1.0.2 over v1.0.1

  • Bugfixes for use with TF 2.14 and 2.15 when tf_keras is installed but not used as tf.keras (ffa453f) (e1d9210).

Full Changelog: v1.0.1...v1.0.2

What's Changed in v1.0.1 over v1.0.0

  • Bugfix for regression tasks runner.GraphMean*Error: the reduce_type is again passed through correctly (19c10f2).

Full Changelog: v1.0.0...v1.0.1

v1.0.2

06 Feb 17:31
2947e2c

Release 1.0 is the first with a stable public API.

What's Changed in r1.0

  • Overall
    • Supported TF/Keras versions moved to >=2.12,<2.16; incompatible Keras v3 raises a clear error.
    • Importing the library no longer leaks private module names.
    • All parts of the GraphSchema protobuf are now exposed under tfgnn.proto.*.
    • Model saving now clearly distinguishes export for inference (pure TF, fully supported) from miscellaneous ways of saving for model reuse.
    • Numerous small bug fixes.
  • Subgraph sampling: major upgrade
    • New, unified sampler for in-memory and Beam-based subgraph sampling.
    • Module tfgnn.experimental.in_memory is removed in favor of the new sampler.
    • New console script tfgnn_sampler replaces the old tfgnn_graph_sampler.
  • GraphTensor
    • Most tfgnn.* functions on GraphTensor now work in Keras' Functional API, including the factory methods GraphTensor.from_pieces(...) etc.
    • New static checks for GraphTensor field shapes; opt out with tfgnn.disable_graph_tensor_validation().
    • New runtime checks for GraphTensor field shapes, sizes and index ranges; opt in with tfgnn.enable_graph_tensor_validation_at_runtime().
    • GraphTensor maintains .row_splits_dtype separately from .indices_dtype.
    • The GraphSchema and the I/O functions for tf.Example now support all non-quantized, non-complex floating-point and integer types as well as bool and string.
    • Added convenience wrapper tfgnn.pool_neighbors_to_node().
    • Misc fixes to tfgnn.random_graph_tensor(); it now respects component boundaries.
  • Runner
    • New tasks for link prediction and node classification/regression based on structured readout.
    • Now comes with API docs.
  • Models collection
    • models/contrastive_losses gets multiple extensions, including a triplet loss and API docs.
    • models/multi_head_attention replaces sigmoid with elu+1 in trained scaling.
    • Bug fixes for mixed precision.

Full Changelog: v0.6.1...v1.0.0

What's Changed in v1.0.2 over v1.0.1

  • Bugfixes for use with TF 2.14 and 2.15 when tf_keras is installed but not used as tf.keras (ffa453f) (e1d9210).

Full Changelog: v1.0.1...v1.0.2

What's Changed in v1.0.1 over v1.0.0

  • Bugfix for regression tasks runner.GraphMean*Error: the reduce_type is again passed through correctly (19c10f2).

Full Changelog: v1.0.0...v1.0.1

v1.0.2rc1

06 Feb 16:38
Pre-release

Release 1.0 is the first with a stable public API.

What's Changed in r1.0

  • Overall
    • Supported TF/Keras versions moved to >=2.12,<2.16; incompatible Keras v3 raises a clear error.
    • Importing the library no longer leaks private module names.
    • All parts of the GraphSchema protobuf are now exposed under tfgnn.proto.*.
    • Model saving now clearly distinguishes export for inference (pure TF, fully supported) from miscellaneous ways of saving for model reuse.
    • Numerous small bug fixes.
  • Subgraph sampling: major upgrade
    • New, unified sampler for in-memory and Beam-based subgraph sampling.
    • Module tfgnn.experimental.in_memory is removed in favor of the new sampler.
    • New console script tfgnn_sampler replaces the old tfgnn_graph_sampler.
  • GraphTensor
    • Most tfgnn.* functions on GraphTensor now work in Keras' Functional API, including the factory methods GraphTensor.from_pieces(...) etc.
    • New static checks for GraphTensor field shapes; opt out with tfgnn.disable_graph_tensor_validation().
    • New runtime checks for GraphTensor field shapes, sizes and index ranges; opt in with tfgnn.enable_graph_tensor_validation_at_runtime().
    • GraphTensor maintains .row_splits_dtype separately from .indices_dtype.
    • The GraphSchema and the I/O functions for tf.Example now support all non-quantized, non-complex floating-point and integer types as well as bool and string.
    • Added convenience wrapper tfgnn.pool_neighbors_to_node().
    • Misc fixes to tfgnn.random_graph_tensor(); it now respects component boundaries.
  • Runner
    • New tasks for link prediction and node classification/regression based on structured readout.
    • Now comes with API docs.
  • Models collection
    • models/contrastive_losses gets multiple extensions, including a triplet loss and API docs.
    • models/multi_head_attention replaces sigmoid with elu+1 in trained scaling.
    • Bug fixes for mixed precision.

Full Changelog: v0.6.1...v1.0.0

What's Changed in v1.0.2 over v1.0.1

  • Bugfixes for use with TF 2.14 and 2.15 when tf_keras is installed but not used as tf.keras (ffa453f) (e1d9210).

Full Changelog: v1.0.1...v1.0.2rc1

What's Changed in v1.0.1 over v1.0.0

  • Bugfix for regression tasks runner.GraphMean*Error: the reduce_type is again passed through correctly (19c10f2).

Full Changelog: v1.0.0...v1.0.1

v1.0.2rc0

06 Feb 14:17
Pre-release

Release 1.0 is the first with a stable public API.

What's Changed in r1.0

  • Overall
    • Supported TF/Keras versions moved to >=2.12,<2.16; incompatible Keras v3 raises a clear error.
    • Importing the library no longer leaks private module names.
    • All parts of the GraphSchema protobuf are now exposed under tfgnn.proto.*.
    • Model saving now clearly distinguishes export for inference (pure TF, fully supported) from miscellaneous ways of saving for model reuse.
    • Numerous small bug fixes.
  • Subgraph sampling: major upgrade
    • New, unified sampler for in-memory and Beam-based subgraph sampling.
    • Module tfgnn.experimental.in_memory is removed in favor of the new sampler.
    • New console script tfgnn_sampler replaces the old tfgnn_graph_sampler.
  • GraphTensor
    • Most tfgnn.* functions on GraphTensor now work in Keras' Functional API, including the factory methods GraphTensor.from_pieces(...) etc.
    • New static checks for GraphTensor field shapes; opt out with tfgnn.disable_graph_tensor_validation().
    • New runtime checks for GraphTensor field shapes, sizes and index ranges; opt in with tfgnn.enable_graph_tensor_validation_at_runtime().
    • GraphTensor maintains .row_splits_dtype separately from .indices_dtype.
    • The GraphSchema and the I/O functions for tf.Example now support all non-quantized, non-complex floating-point and integer types as well as bool and string.
    • Added convenience wrapper tfgnn.pool_neighbors_to_node().
    • Misc fixes to tfgnn.random_graph_tensor(); it now respects component boundaries.
  • Runner
    • New tasks for link prediction and node classification/regression based on structured readout.
    • Now comes with API docs.
  • Models collection
    • models/contrastive_losses gets multiple extensions, including a triplet loss and API docs.
    • models/multi_head_attention replaces sigmoid with elu+1 in trained scaling.
    • Bug fixes for mixed precision.

Full Changelog: v0.6.1...v1.0.0

What's Changed in v1.0.2 over v1.0.1

  • Bugfix for use with TF 2.14 and 2.15 when tf_keras is installed but not used as tf.keras (ffa453f).

Full Changelog: v1.0.1...v1.0.2rc0

What's Changed in v1.0.1 over v1.0.0

  • Bugfix for regression tasks runner.GraphMean*Error: the reduce_type is again passed through correctly (19c10f2).

Full Changelog: v1.0.0...v1.0.1

v1.0.1

02 Feb 09:24

Release 1.0 is the first with a stable public API.

What's Changed in r1.0

  • Overall
    • Supported TF/Keras versions moved to >=2.12,<2.16; incompatible Keras v3 raises a clear error.
    • Importing the library no longer leaks private module names.
    • All parts of the GraphSchema protobuf are now exposed under tfgnn.proto.*.
    • Model saving now clearly distinguishes export for inference (pure TF, fully supported) from miscellaneous ways of saving for model reuse.
    • Numerous small bug fixes.
  • Subgraph sampling: major upgrade
    • New, unified sampler for in-memory and Beam-based subgraph sampling.
    • Module tfgnn.experimental.in_memory is removed in favor of the new sampler.
    • New console script tfgnn_sampler replaces the old tfgnn_graph_sampler.
  • GraphTensor
    • Most tfgnn.* functions on GraphTensor now work in Keras' Functional API, including the factory methods GraphTensor.from_pieces(...) etc.
    • New static checks for GraphTensor field shapes; opt out with tfgnn.disable_graph_tensor_validation().
    • New runtime checks for GraphTensor field shapes, sizes and index ranges; opt in with tfgnn.enable_graph_tensor_validation_at_runtime().
    • GraphTensor maintains .row_splits_dtype separately from .indices_dtype.
    • The GraphSchema and the I/O functions for tf.Example now support all non-quantized, non-complex floating-point and integer types as well as bool and string.
    • Added convenience wrapper tfgnn.pool_neighbors_to_node().
    • Misc fixes to tfgnn.random_graph_tensor(); it now respects component boundaries.
  • Runner
    • New tasks for link prediction and node classification/regression based on structured readout.
    • Now comes with API docs.
  • Models collection
    • models/contrastive_losses gets multiple extensions, including a triplet loss and API docs.
    • models/multi_head_attention replaces sigmoid with elu+1 in trained scaling.
    • Bug fixes for mixed precision.

Full Changelog: v0.6.1...v1.0.0

What's Changed in v1.0.1 over v1.0.0

  • Bugfix for regression tasks runner.GraphMean*Error: the reduce_type is again passed through correctly (19c10f2).

Full Changelog: v1.0.0...v1.0.1

v1.0.0

19 Dec 08:51

First release with a stable public API.

What's Changed

  • Overall
    • Supported TF/Keras versions moved to >=2.12,<2.16; incompatible Keras v3 raises a clear error.
    • Importing the library no longer leaks private module names.
    • All parts of the GraphSchema protobuf are now exposed under tfgnn.proto.*.
    • Model saving now clearly distinguishes export for inference (pure TF, fully supported) from miscellaneous ways of saving for model reuse.
    • Numerous small bug fixes.
  • Subgraph sampling: major upgrade
    • New, unified sampler for in-memory and Beam-based subgraph sampling.
    • Module tfgnn.experimental.in_memory is removed in favor of the new sampler.
    • New console script tfgnn_sampler replaces the old tfgnn_graph_sampler.
  • GraphTensor
    • Most tfgnn.* functions on GraphTensor now work in Keras' Functional API, including the factory methods GraphTensor.from_pieces(...) etc.
    • New static checks for GraphTensor field shapes; opt out with tfgnn.disable_graph_tensor_validation().
    • New runtime checks for GraphTensor field shapes, sizes and index ranges; opt in with tfgnn.enable_graph_tensor_validation_at_runtime().
    • GraphTensor maintains .row_splits_dtype separately from .indices_dtype.
    • The GraphSchema and the I/O functions for tf.Example now support all non-quantized, non-complex floating-point and integer types as well as bool and string.
    • Added convenience wrapper tfgnn.pool_neighbors_to_node().
    • Misc fixes to tfgnn.random_graph_tensor(); it now respects component boundaries.
  • Runner
    • New tasks for link prediction and node classification/regression based on structured readout.
    • Now comes with API docs.
  • Models collection
    • models/contrastive_losses gets multiple extensions, including a triplet loss and API docs.
    • models/multi_head_attention replaces sigmoid with elu+1 in trained scaling.
    • Bug fixes for mixed precision.

Full Changelog: v0.6.1...v1.0.0

v1.0.0rc0

14 Dec 16:59
edd7b04
Pre-release

Initial release candidate for v1.0.0.

v1.0.0.dev2

13 Dec 17:08
Pre-release

Early developmental release of tensorflow-gnn 1.0.0 code; docs still unfinished.

v0.6.1

06 Dec 16:26
d6e9314
  • Running import tensorflow_gnn now checks whether the version of tf.keras is compatible.