merge a few PRs for 1.6.2
guschmue committed Jun 17, 2020
2 parents 9ad16b1 + f5e4ed3 commit 8d52538
Showing 10 changed files with 139 additions and 138 deletions.
15 changes: 8 additions & 7 deletions README.md
@@ -2,16 +2,16 @@

| Build Type | OS | Python | Tensorflow | Onnx opset | Status |
| --- | --- | --- | --- | --- | --- |
-| Unit Test - Basic | Linux, MacOS<sup>\*</sup>, Windows<sup>\*</sup> | 3.6, 3.7 | 1.12-1.15, 2.1 | 7-11 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test?branchName=master)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=16&branchName=master) |
-| Unit Test - Full | Linux, MacOS, Windows | 3.6, 3.7 | 1.12-1.15, 2.1 | 7-11 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test-matrix?branchName=master)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=18&branchName=master) | |
+| Unit Test - Basic | Linux, MacOS<sup>\*</sup>, Windows<sup>\*</sup> | 3.6, 3.7 | 1.12-1.15, 2.1-2.2 | 7-12 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test?branchName=master)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=16&branchName=master) |
+| Unit Test - Full | Linux, MacOS, Windows | 3.6, 3.7 | 1.12-1.15, 2.1-2.2 | 7-12 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test-matrix?branchName=master)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=18&branchName=master) | |

## Supported Versions

### ONNX

tensorflow-onnx will use the ONNX version installed on your system, or install the latest ONNX version if none is found.

-We support opset 6 to 11. By default we use opset 8 for the resulting ONNX graph since most runtimes will support opset 8.
+We support ONNX opset-6 to opset-12. By default we use opset-8 for the resulting ONNX graph since most runtimes will support opset-8.
Support for future opsets is added as they are released.

If you want the graph to be generated with a specific opset, use ```--opset``` in the command line, for example ```--opset 11```.
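A minimal sketch of the command line this refers to (the file name and the `input:0`/`output:0` tensor names are placeholders for your own model; flag spellings other than `--opset` may differ between tf2onnx versions):

```shell
# Convert a frozen TensorFlow graph to ONNX, targeting opset 11.
python -m tf2onnx.convert \
    --input model.pb \
    --inputs input:0 \
    --outputs output:0 \
    --output model.onnx \
    --opset 11
```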
@@ -20,13 +20,14 @@

We support all ```tf-1.x graphs```. To keep our test matrix manageable we test tf2onnx running on top of ```tf-1.12 and up```. tf2onnx-1.5.4 was the last version that was tested all the way back to tf-1.4.

-There is now ```experimental support for tf-2.x```. Basic unit tests are passing as well as control flow.
+There is now ```experimental support for tf-2.x```.
+With the exception of LSTM unit tests, all unit tests are enabled and passing.
+Unit tests that we still need to fix are marked with ```@skip_tf2```.
-GRU/LSTM's are converting but not runnable due to type/shape inference issues at runtime (working on that one).
-All unit tests are running in eager mode and after execution we take the python function, make it a graph and convert this to onnx.
-If running under tf-2.x we are using the tensorflow V2 controlflow.
+All unit tests are running in eager mode. After execution we take the python function, make it a graph and convert it to ONNX.
+When running under tf-2.x tf2onnx will use the tensorflow V2 controlflow.

-You can install tf2onnx on top of tf-1.x or tf-2.x and convert tf-1.x or tf-2.x models.
+You can install tf2onnx on top of tf-1.x or tf-2.x.

### Python

2 changes: 1 addition & 1 deletion VERSION_NUMBER
@@ -1 +1 @@
-1.6.1
+1.6.2
10 changes: 9 additions & 1 deletion tf2onnx/graph.py
@@ -301,7 +301,7 @@ def set_tensor_value(self, new_val):
         self.set_attr("value", onnx_tensor)
         # track shapes in _output_shapes
         self._graph_check()
-        self.graph.set_shape(onnx_tensor.name, onnx_tensor.dims)
+        self.graph.set_shape(onnx_tensor.name, list(onnx_tensor.dims))

def get_body_graphs(self):
self._graph_check()
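The one-line fix in this hunk snapshots `onnx_tensor.dims` into a plain `list` before storing it. One plausible motivation (an assumption on our part, not stated in the commit) is that `dims` is a live mutable sequence owned by the tensor message, so storing it directly keeps an alias rather than a value. A stand-in sketch in plain Python, with no onnx dependency (`TensorStub` is hypothetical):

```python
class TensorStub:
    # Hypothetical stand-in for an onnx TensorProto: dims is a mutable
    # sequence owned by the tensor, like a protobuf repeated field.
    def __init__(self, dims):
        self.dims = dims

shapes = {}

# Storing the live object keeps an alias into the tensor...
t = TensorStub([2, 3])
shapes["w"] = t.dims
t.dims.append(4)
assert shapes["w"] == [2, 3, 4]   # tracked shape silently changed

# ...while list() snapshots the value, as the fix does.
t2 = TensorStub([2, 3])
shapes["w2"] = list(t2.dims)
t2.dims.append(4)
assert shapes["w2"] == [2, 3]     # snapshot unaffected
```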
@@ -484,6 +484,14 @@ def inputs(self):
                 all_inputs.append(n)
         return all_inputs
 
+    def make_consts(self, values, np_type=np.int64, skip_conversion=False, raw=True):
+        """create list of consts of same type"""
+        consts = []
+        for value in values:
+            np_val = np.array(value).astype(np_type)
+            consts.append(self.make_const(utils.make_name("const"), np_val, skip_conversion, raw))
+        return consts
+
     def make_const(self, name, np_val, skip_conversion=False, raw=True):
         """Make a new constant in the graph.
         Args:
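The new `make_consts` helper simply coerces each value to a numpy array of one dtype and maps `make_const` over the list. A self-contained sketch of the same pattern, using a minimal stand-in class rather than tf2onnx's real `Graph` (`MiniGraph` and its naming scheme are hypothetical):

```python
import itertools

import numpy as np


class MiniGraph:
    # Hypothetical stand-in for tf2onnx's Graph: make_const stores a
    # named numpy constant; make_consts builds several of one dtype.
    _counter = itertools.count()

    def __init__(self):
        self.consts = {}

    def make_const(self, name, np_val):
        self.consts[name] = np_val
        return name

    def make_consts(self, values, np_type=np.int64):
        # Mirrors the new helper: coerce each value, give it a fresh name.
        return [self.make_const("const_%d" % next(self._counter),
                                np.array(v).astype(np_type))
                for v in values]


g = MiniGraph()
names = g.make_consts([0, [1, 2], [[3], [4]]])
```

All three constants come back with the same dtype regardless of the nesting of the input values, which is the point of the helper.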
12 changes: 12 additions & 0 deletions tf2onnx/onnx_opset/generator.py
@@ -194,3 +194,15 @@ def version_8(cls, ctx, node, **kwargs):
         ctx.remove_node(node.name)
         ctx.add_graph_input(output_names[0], type_0, shape_0)
         ctx.add_graph_input(output_names[1], type_1, shape_1)
+
+
+@tf_op("QueueDequeueManyV2")
+class QueueDequeueManyV2:
+    @classmethod
+    def version_8(cls, ctx, node, **kwargs):
+        outputs = node.output
+        shapes = node.output_shapes
+        dtypes = node.output_dtypes
+        ctx.remove_node(node.name)
+        for i, output in enumerate(outputs):
+            ctx.add_graph_input(output, dtypes[i], shapes[i])
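The new handler drops the queue op and promotes each dequeued tensor to a plain graph input. Its control flow can be exercised with a tiny mock of the two `ctx` methods it touches (`MockCtx` and the node data below are stand-ins, not tf2onnx's real classes):

```python
class MockCtx:
    # Records just enough to show what the handler does to the graph.
    def __init__(self):
        self.graph_inputs = []
        self.removed = []

    def remove_node(self, name):
        self.removed.append(name)

    def add_graph_input(self, name, dtype, shape):
        self.graph_inputs.append((name, dtype, shape))


# Pretend node data for a QueueDequeueManyV2 with two outputs.
outputs = ["dequeue:0", "dequeue:1"]
dtypes = ["float32", "int64"]
shapes = [[-1, 28, 28], [-1]]

ctx = MockCtx()
ctx.remove_node("dequeue")
for i, output in enumerate(outputs):
    ctx.add_graph_input(output, dtypes[i], shapes[i])
```

After the loop the queue node is gone and each former output tensor is a graph input with its recorded dtype and shape.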
4 changes: 3 additions & 1 deletion tf2onnx/onnx_opset/nn.py
@@ -248,6 +248,7 @@ def version_1(cls, ctx, node, **kwargs):
         # Note: inputs are reversed from what one would expect.
         conv_kernel_shape(ctx, node, 1)
         input_shape = ctx.get_shape(node.input[2])
+        output_shape_orig = node.output_shapes
 
         # ouput_shape is explicitly specified here, in this case pads values are auto generated/calculated.
         if node.inputs[0].is_const():
@@ -285,7 +286,8 @@ def version_1(cls, ctx, node, **kwargs):
             const_one_two = ctx.make_const(utils.make_name(node.name + "_const_one_two"),
                                            np.array([1, 2], dtype=np.int64))
             slice_node = ctx.make_node("Slice",
-                                       [node.output[0], starts.output[0], ends.output[0], const_one_two.output[0]])
+                                       [node.output[0], starts.output[0], ends.output[0], const_one_two.output[0]],
+                                       shapes=output_shape_orig)
             downstream_nodes = ctx.find_output_consumers(node.output[0])
             downstream_nodes.remove(output_shape)
             downstream_nodes.remove(slice_node)
