1.6.1
Summary
- support for tf-1.15
- experimental support for tf-2.1 and tf-2.2 (there is a remaining issue with LSTM support for tf-2.x)
- support for opset 12
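The new opset target can be selected via the converter's `--opset` flag. A minimal sketch of a SavedModel conversion, assuming tf2onnx is installed (the model directory and output filename below are placeholders):

```shell
# Convert a TensorFlow SavedModel to ONNX, targeting opset 12.
# "my_saved_model" and "model.onnx" are placeholder paths.
python -m tf2onnx.convert --saved-model my_saved_model --opset 12 --output model.onnx
```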
Changes since v1.5.6
- add support for tf.math.is_finite #936
- Tutorial on how to convert EfficientDet to ONNX #937
- Enable all cond unit tests for tf-2.x #962
- dims can be a list #956
- Enable tf-2.2 CI #955
- Support MatrixDiagV1/V2/V3 and MatrixSetDiagV3 #935
- MatrixDiagPartV3: Change consts to dynamic ops #948
- Add new pattern for RandomStandardNormal op in TF2 #949
- Update version support for opset 12 operators #947
- MatrixDiagPartV3 #942
- fix transpose optimizer for slice op #934
- Add half pixel transformation to resize bilinear op #932
- Add stacked LSTM support #925
- Activate opset12 tests #923
- Multiple fixes for Bert Model (fine-tuned) #929
- Add 2 more handlers for Transpose: Exp and Log #928
- Support for QuantizeAndDequantize operation #919
- Ensure scalar values only in MatrixDiagPart->Range() function #924
- Fix UnicodeDecode error #922
- Ignore shape inference warnings for FusedBatchNormV3:5 #916
- Fix LSTM pattern matching for version between 1.15.0 and 2.x. #913
- handle softplus in transpose optimizer, needed for mish #908
- Fix Split in case splits are negative #891
- move some ops to generators.py, new version of supported ops doc #888
- add tf_optimize back to tf2onnx since apps are using it #882
- ReverseV2 - fix shape computations #909
- Fix Transpose + Pad handler, for Keras app MobilenetV2 model #907
- Fix GEMM to check for shape broadcast compatibility of A*B and C #906
- Some ops for opset 12. #903
- opset 12 support #897
- Fix NonMaxSuppression #895
- Support MatrixDiagPart v2 and v3 #890
- Adds Sum(Transpose(x1), Transpose(x2),...) optimizer. #884
- Add Keras apps, ResNet50 model test #880
- Add getting started section to README #877
- resolve warnings and recommendations from LGTM.com #879
- map bfloat to float16 #878
- refactor resize #874
- Fix typo (no function change) #873
- Fix scatternd - inputs bound to different type #870
- Add fusion for Conv2D+ BatchNormalization #871
- dynamic random #869
- zero like bool #866
- use same opset->ir mapping as in r1.5 branch #867
A huge thank you to our contributors:
Anders Huss, Buddha Puneeth Nandanoor, Chin Huang, Dheeraj Peri, Emma Yu, Holger Finger, Johannes Dobler, Nikita Pokidyshev, PreethaVeera, Satyajith, Tian Jin, Vincent Delaitre, alexG, anttisaukko, dheerajperi, dirkbrink, mindest, simpeng, ziyuang