Commit [TF FE] Port operation loaders and refactor preliminary versions (openvinotoolkit#8150)

* Migrate POC for TensorFlow frontend

Signed-off-by: Roman Kazantsev <[email protected]>

* Refactor InputModelTensorFlow API

Signed-off-by: Roman Kazantsev <[email protected]>

* Repack POC to official API

Signed-off-by: Roman Kazantsev <[email protected]>

* Remove tensorflow API from public include

Signed-off-by: Roman Kazantsev <[email protected]>

* Make TF frontend work from MO and clean up code

Signed-off-by: Roman Kazantsev <[email protected]>

* Apply codestyle

* Fix win build

* Fix Linux build

Signed-off-by: Roman Kazantsev <[email protected]>

* Implement Place class

Signed-off-by: Roman Kazantsev <[email protected]>

* Determine outputs from graph

* Implement all Place classes

Signed-off-by: Roman Kazantsev <[email protected]>

* Make small clean-up

Signed-off-by: Roman Kazantsev <[email protected]>

* Apply code-style corrections

Signed-off-by: Roman Kazantsev <[email protected]>

* Determine cut nodes

* Apply codestyle

* Rework to use places

* Fix conversion issue

* Fix build

* Fix conversion

* Small fixes

* Add test for tf frontend

* Add tests

* Implement partial conversion

* Use dynamic type in TFFrameworkNode

* Fix build on Linux

* Implement InputModelTF class

Signed-off-by: Roman Kazantsev <[email protected]>

* Fix code by replacing InputModelTensorFlow with InputModelTF

Signed-off-by: Roman Kazantsev <[email protected]>

* Fix to pass getPlaceByTensorName test

Signed-off-by: Roman Kazantsev <[email protected]>

* Refactor and clean the code

Signed-off-by: Roman Kazantsev <[email protected]>

* Finalize refactoring code

Signed-off-by: Roman Kazantsev <[email protected]>

* Support freezing inputs

Signed-off-by: Roman Kazantsev <[email protected]>

* Add support for pruning input ports as new model output

Signed-off-by: Roman Kazantsev <[email protected]>

* Apply code-style fixes

Signed-off-by: Roman Kazantsev <[email protected]>

* move op convertors to separate files, refactoring

* openvino codestyle

* openvino codestyle

* fix crash of layer tests

* fix misprint

* Implement TensorFlow NodeContext and DecoderTFProto classes

Signed-off-by: Roman Kazantsev <[email protected]>

* Switch to new NodeContext

Signed-off-by: Roman Kazantsev <[email protected]>

* Remove ngraph_builder class and node_context_impl class

Signed-off-by: Roman Kazantsev <[email protected]>

* Move decoder/graph_iterator to separate files and remove old files

Signed-off-by: Roman Kazantsev <[email protected]>

* Document Decoder, GraphIterator, and NodeContext classes

Signed-off-by: Roman Kazantsev <[email protected]>

* Apply code style

Signed-off-by: Roman Kazantsev <[email protected]>

* Remove empty file graph_iterator_proto.cpp and redundant comments

Signed-off-by: Roman Kazantsev <[email protected]>

* Use base class for GraphIterator in model class and correct exception class

Signed-off-by: Roman Kazantsev <[email protected]>

* Use ends_with from util library

* Retain only the InputModelTF constructor with GraphIterator and adapt the remaining code

Signed-off-by: Roman Kazantsev <[email protected]>

* Correct code after merge

Signed-off-by: Roman Kazantsev <[email protected]>

* Apply code style

Signed-off-by: Roman Kazantsev <[email protected]>

* Fix code based on feedback: delete extra namespace usage, etc.

Signed-off-by: Roman Kazantsev <[email protected]>

* refactoring of tf FrontEnd: rename namespaces, delete default opset

* codestyle

* fix e2e tests

* change namespaces of external classes

* resolve review comment

* codestyle

* Enable translators for Size, Shape, Rank, Range, Reshape ops

* Add translators for MatMul, Reciprocal, Square, XdivY ops

* Enable translators for Where, Log1p, Transpose, ZerosLike, Pack ops

* Enable Split, IsFinite, Tile ops, refactor Reduce ops

* Add Reverse, Round ops

* fix codestyle

* Enable Unpack, L2Loss ops

* Add LRN, GatherND, TopK ops, fix Reduce ops

* codestyle

* Revise StridedSlice, SplitV, SpaceToDepth ops

* Add Concat, FusedBatchNormEx, Slice, SpaceToDepth ops

* Add Interpolate, BatchToSpaceNd, SpaceToBatchNd, NonMaxSuppression ops support

* codestyle

* Port CropAndResize, FakeQuantMinMaxVars, FusedDepthwiseConv2d

* fix translators

* codestyle

* Resolve review remarks

* fix wrong merge

* fix incorrect merge, refactoring

* add LeakyRelu op

* codestyle

* Add LogicalXor operation

* Add support for Swish op, set correct tensor names, refactoring

* fix incorrect merge

* codestyle

* fix unit tests

* fix build

* Refactoring

* codestyle

* fix win build

* fix reduce op

* Investigate failures on Windows

* add debug prints

* debug prints

* debug prints

* Delete debug prints

* clean up

* clean up

* codestyle

* delete debug changes

* Delete redundant comments

* rename utils functions

* rename translators

* rename layout convertors

* resolve review comments

* resolve review comments

* codestyle

* rename NodeContext methods

* add todo comment

* Remove internal tf ops from op_table

* fix decode

Co-authored-by: Roman Kazantsev <[email protected]>
Co-authored-by: Maxim Vafin <[email protected]>
3 people authored Nov 8, 2021
1 parent 279d905 commit 5dacaa3
Showing 87 changed files with 2,340 additions and 1,498 deletions.
4 changes: 2 additions & 2 deletions ngraph/core/src/op/unsqueeze.cpp
@@ -89,8 +89,8 @@ bool evaluate_unsqueeze(const HostTensorPtr& arg0, const HostTensorPtr& arg1, co
     auto data_shape = arg0->get_shape();
     int64_t data_rank = static_cast<int64_t>(data_shape.size());
     auto axes_shape = arg1->get_shape();
-    NGRAPH_CHECK(axes_shape.size() == 1, "Axes to add must be a vector.");
-    NGRAPH_CHECK(axes_shape[0] > 0, "Axes cannot be empty.");
+    NGRAPH_CHECK(axes_shape.size() == 1 || axes_shape.empty(),
+                 "Axes to add must be a scalar or 1D tensor with 1 element");
 
     auto out_shape = data_shape;
     int64_t out_rank = data_rank + static_cast<int64_t>(shape_size(axes_shape));
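The relaxed check accepts either a scalar or a one-element 1-D tensor as the axes input. A minimal standalone sketch of the new rule (the axes_shape_is_valid helper is hypothetical, not part of nGraph):

#include <cassert>
#include <vector>

// Hypothetical stand-in for the shape type used above.
using Shape = std::vector<size_t>;

// Mirrors the relaxed NGRAPH_CHECK: axes must be a scalar (rank 0,
// i.e. an empty dimension list) or a 1-D tensor.
bool axes_shape_is_valid(const Shape& axes_shape) {
    return axes_shape.size() == 1 || axes_shape.empty();
}

int main() {
    assert(axes_shape_is_valid({}));       // scalar axes: newly accepted
    assert(axes_shape_is_valid({1}));      // 1-D tensor with one element: accepted
    assert(!axes_shape_is_valid({2, 2}));  // higher-rank axes: rejected
}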
4 changes: 4 additions & 0 deletions ngraph/frontend/frontend_manager/src/plugin_loader.cpp
@@ -16,6 +16,7 @@
 
 #include <sys/stat.h>
 
+#include <ngraph/log.hpp>
 #include <string>
 #include <vector>
 
@@ -29,10 +30,12 @@ using namespace ngraph::frontend;
 # define DLOPEN(file_str) LoadLibrary(TEXT(file_str.c_str()))
 # define DLSYM(obj, func) GetProcAddress(obj, func)
 # define DLCLOSE(obj) FreeLibrary(obj)
+# define DLERROR() ""
 #else
 # define DLOPEN(file_str) dlopen(file_str.c_str(), RTLD_LAZY)
 # define DLSYM(obj, func) dlsym(obj, func)
 # define DLCLOSE(obj) dlclose(obj)
+# define DLERROR() dlerror()
 #endif
 
 // TODO: change to std::filesystem for C++17
@@ -76,6 +79,7 @@ std::vector<PluginData> ngraph::frontend::load_plugins(const std::string& dir_na
     for (const auto& file : files) {
         auto shared_object = DLOPEN(file);
         if (!shared_object) {
+            NGRAPH_DEBUG << "Error loading FrontEnd " << file << " " << DLERROR() << std::endl;
             continue;
         }
 
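The new DLERROR() macro lets the plugin-loading loop log why a shared object failed to load instead of skipping it silently. A self-contained POSIX sketch of the same pattern (the plugin path here is hypothetical; on Windows the macro expands to an empty string, as above):

#include <dlfcn.h>

#include <iostream>
#include <string>

#define DLOPEN(file_str) dlopen(file_str.c_str(), RTLD_LAZY)
#define DLCLOSE(obj) dlclose(obj)
#define DLERROR() dlerror()

int main() {
    std::string file = "libmy_frontend.so";  // hypothetical plugin path
    auto shared_object = DLOPEN(file);
    if (!shared_object) {
        // dlerror() returns a human-readable description of the last failure.
        std::cerr << "Error loading FrontEnd " << file << " " << DLERROR() << std::endl;
        return 1;
    }
    DLCLOSE(shared_object);
}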
@@ -11,5 +11,3 @@
 #else
 # define TF_API OPENVINO_CORE_IMPORTS
 #endif // tensorflow_ngraph_frontend_EXPORTS
-
-#define NGRAPH_VLOG(I) std::ostringstream()
95 changes: 51 additions & 44 deletions ngraph/frontend/tensorflow/src/decoder_proto.cpp
@@ -6,6 +6,8 @@
 
 #include "node_context.hpp"
 
+using namespace std;
+
 namespace ov {
 namespace frontend {
 namespace tf {
@@ -27,59 +29,69 @@ const std::map<::tensorflow::DataType, ov::element::Type>& TYPE_MAP() {
 }
 } // namespace
 
-std::shared_ptr<ov::Variant> DecoderTFProto::get_attribute(const std::string& name,
-                                                           const VariantTypeInfo& type_info) const {
+template <class T>
+bool is_type(const VariantTypeInfo& type_info) {
+    return type_info == VariantWrapper<T>::get_type_info_static();
+}
+
+template <class T>
+shared_ptr<VariantWrapper<T>> create_variant(const T& data) {
+    return make_shared<VariantWrapper<T>>(data);
+}
+
+shared_ptr<Variant> DecoderTFProto::get_attribute(const string& name, const VariantTypeInfo& type_info) const {
     auto attrs = decode_attribute_helper(name);
     if (attrs.empty()) {
        return nullptr;
    }
 
-    if (type_info == VariantWrapper<std::string>::get_type_info_static()) {
-        return std::make_shared<VariantWrapper<std::string>>(attrs[0].s());
-    } else if (type_info == VariantWrapper<int64_t>::get_type_info_static()) {
-        return std::make_shared<VariantWrapper<int64_t>>(attrs[0].i());
-    } else if (type_info == VariantWrapper<std::vector<int64_t>>::get_type_info_static()) {
-        std::vector<int64_t> longs;
+    if (is_type<string>(type_info)) {
+        return create_variant<string>(attrs[0].s());
+    }
+    if (is_type<int64_t>(type_info)) {
+        return create_variant<int64_t>(attrs[0].i());
+    } else if (is_type<vector<int64_t>>(type_info)) {
+        vector<int64_t> longs;
         longs.reserve(attrs[0].list().i_size());
         for (size_t idx = 0; idx < attrs[0].list().i_size(); ++idx) {
             longs.push_back(attrs[0].list().i(idx));
         }
-        return std::make_shared<VariantWrapper<std::vector<int64_t>>>(longs);
-    } else if (type_info == VariantWrapper<int32_t>::get_type_info_static()) {
-        return std::make_shared<VariantWrapper<int32_t>>(static_cast<int32_t>(attrs[0].i()));
-    } else if (type_info == VariantWrapper<std::vector<int32_t>>::get_type_info_static()) {
-        std::vector<int32_t> ints;
+        return create_variant<vector<int64_t>>(longs);
+    } else if (is_type<int32_t>(type_info)) {
+        return create_variant<int32_t>(static_cast<int32_t>(attrs[0].i()));
+    } else if (is_type<vector<int32_t>>(type_info)) {
+        vector<int32_t> ints;
         ints.reserve(attrs[0].list().i_size());
         for (size_t idx = 0; idx < attrs[0].list().i_size(); ++idx) {
             ints.push_back(static_cast<int32_t>(attrs[0].list().i(idx)));
         }
-        return std::make_shared<VariantWrapper<std::vector<int32_t>>>(ints);
-    } else if (type_info == VariantWrapper<float>::get_type_info_static()) {
-        return std::make_shared<VariantWrapper<float>>(attrs[0].f());
-    } else if (type_info == VariantWrapper<std::vector<float>>::get_type_info_static()) {
-        std::vector<float> floats;
+        return create_variant<vector<int32_t>>(ints);
+    } else if (is_type<float>(type_info)) {
+        return create_variant<float>(attrs[0].f());
+    } else if (is_type<vector<float>>(type_info)) {
+        vector<float> floats;
         floats.reserve(attrs[0].list().i_size());
         for (size_t idx = 0; idx < attrs[0].list().i_size(); ++idx) {
             floats.push_back(attrs[0].list().f(idx));
         }
-        return std::make_shared<VariantWrapper<std::vector<float>>>(floats);
-    } else if (type_info == VariantWrapper<ov::element::Type>::get_type_info_static()) {
+        return create_variant<vector<float>>(floats);
+    } else if (is_type<ov::element::Type>(type_info)) {
         auto data_type = attrs[0].type();
-        return std::make_shared<VariantWrapper<ov::element::Type>>(TYPE_MAP().at(data_type));
-    } else if (type_info == VariantWrapper<bool>::get_type_info_static()) {
-        return std::make_shared<VariantWrapper<bool>>(attrs[0].b());
-    } else if (type_info == VariantWrapper<::tensorflow::DataType>::get_type_info_static()) {
-        return std::make_shared<VariantWrapper<::tensorflow::DataType>>(attrs[0].type());
-    } else if (type_info == VariantWrapper<::tensorflow::TensorProto>::get_type_info_static()) {
-        return std::make_shared<VariantWrapper<::tensorflow::TensorProto>>(attrs[0].tensor());
-    } else if (type_info == VariantWrapper<::ov::PartialShape>::get_type_info_static()) {
-        std::vector<ov::Dimension> dims;
+        return create_variant<ov::element::Type>(TYPE_MAP().at(data_type));
+    } else if (is_type<bool>(type_info)) {
+        return create_variant<bool>(attrs[0].b());
+    } else if (is_type<::tensorflow::DataType>(type_info)) {
+        return create_variant<::tensorflow::DataType>(attrs[0].type());
+    } else if (is_type<::tensorflow::TensorProto>(type_info)) {
+        return create_variant<::tensorflow::TensorProto>(attrs[0].tensor());
+    } else if (is_type<::ov::PartialShape>(type_info)) {
+        vector<ov::Dimension> dims;
         auto tf_shape = attrs[0].shape();
         for (int i = 0; i < tf_shape.dim_size(); i++) {
-            dims.push_back(tf_shape.dim(i).size());
+            dims.emplace_back(tf_shape.dim(i).size());
         }
         auto pshape = ov::PartialShape(dims);
-        return std::make_shared<VariantWrapper<::ov::PartialShape>>(pshape);
+        return create_variant<::ov::PartialShape>(pshape);
     }
 
     // type is not supported by decoder
@@ -91,14 +103,14 @@ size_t DecoderTFProto::get_input_size() const {
 }
 
 void DecoderTFProto::get_input_node(size_t input_port_idx,
-                                    std::string& producer_name,
+                                    string& producer_name,
                                     size_t& producer_output_port_index) const {
     // TODO: handle body graph nodes with a couple of columns
-    std::string producer_port_name = m_node_def->input(input_port_idx);
+    string producer_port_name = m_node_def->input(input_port_idx);
     auto delim_pos = producer_port_name.find(':');
-    if (delim_pos != std::string::npos) {
+    if (delim_pos != string::npos) {
         producer_name = producer_port_name.substr(0, delim_pos);
-        producer_output_port_index = std::stoi(producer_port_name.substr(delim_pos));
+        producer_output_port_index = stoi(producer_port_name.substr(delim_pos));
         return;
     }
     producer_name = producer_port_name;
@@ -113,16 +125,11 @@ const std::string& DecoderTFProto::get_op_name() const {
     return m_node_def->name();
 }
 
-std::vector<::tensorflow::AttrValue> DecoderTFProto::decode_attribute_helper(const std::string& name) const {
+vector<::tensorflow::AttrValue> DecoderTFProto::decode_attribute_helper(const string& name) const {
     auto attr_map = m_node_def->attr();
-    FRONT_END_GENERAL_CHECK(attr_map.contains(name),
-                            "An error occurred while parsing the ",
-                            name,
-                            " attribute of ",
-                            this->get_op_type(),
-                            "node");
-    auto value = m_node_def->attr().at(name);
-    return {value};
+    if (attr_map.contains(name))
+        return {m_node_def->attr().at(name)};
+    return {};
 }
 } // namespace tf
 } // namespace frontend
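The is_type/create_variant helpers collapse the repeated VariantWrapper<T>::get_type_info_static() comparisons and std::make_shared<VariantWrapper<T>> constructions into two small templates. A minimal sketch of the same dispatch pattern with a stand-in variant hierarchy (the names below are illustrative, not the OpenVINO API):

#include <cstdint>
#include <iostream>
#include <memory>
#include <string>
#include <typeindex>

// Stand-in variant hierarchy; OpenVINO's Variant/VariantWrapper work similarly.
struct Variant {
    virtual ~Variant() = default;
};

template <class T>
struct VariantWrapper : Variant {
    explicit VariantWrapper(T v) : data(std::move(v)) {}
    T data;
    static std::type_index get_type_info_static() {
        return typeid(T);
    }
};

using VariantTypeInfo = std::type_index;

template <class T>
bool is_type(const VariantTypeInfo& type_info) {
    return type_info == VariantWrapper<T>::get_type_info_static();
}

template <class T>
std::shared_ptr<VariantWrapper<T>> create_variant(const T& data) {
    return std::make_shared<VariantWrapper<T>>(data);
}

// One branch per supported attribute type, as in get_attribute above.
std::shared_ptr<Variant> get_attribute(const VariantTypeInfo& type_info) {
    if (is_type<std::string>(type_info)) {
        return create_variant<std::string>("hello");
    } else if (is_type<int64_t>(type_info)) {
        return create_variant<int64_t>(42);
    }
    return nullptr;  // type is not supported by the decoder
}

int main() {
    auto v = get_attribute(typeid(int64_t));
    std::cout << std::static_pointer_cast<VariantWrapper<int64_t>>(v)->data << "\n";
}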
3 changes: 2 additions & 1 deletion ngraph/frontend/tensorflow/src/exceptions.hpp
@@ -4,7 +4,8 @@
 #pragma once
 
 #include <frontend_manager/frontend_exceptions.hpp>
-#include <openvino/core/node.hpp>
+
+#include "openvino/core/node.hpp"
 
 namespace ov {
 namespace frontend {
24 changes: 8 additions & 16 deletions ngraph/frontend/tensorflow/src/frontend.cpp
@@ -2,12 +2,12 @@
 // SPDX-License-Identifier: Apache-2.0
 //
 
-#include <openvino/util/common_util.hpp>
 #include <tensorflow_frontend/frontend.hpp>
 #include <tensorflow_frontend/graph_iterator.hpp>
 
 #include "model.hpp"
 #include "op_table.hpp"
+#include "openvino/util/common_util.hpp"
 #include "pass/transpose_sinking.hpp"
 #include "tf_framework_node.hpp"
 #include "utils.hpp"
@@ -34,7 +34,6 @@ void translate_framework_node(const std::shared_ptr<TFFrameworkNode>& node,
 
     NodeContext node_ctx(*node->get_decoder(), named_inputs);
     auto new_node_outputs = translator_it->second(node_ctx);
-    SetTracingInfo(node_ctx.get_name(), new_node_outputs.front());
 
     auto new_output = new_node_outputs.begin();
     auto old_outputs = node->outputs();
@@ -64,12 +63,11 @@ void FrontEndTF::translate_graph(const ngraph::frontend::InputModel::Ptr& model,
     const auto& model_inputs = model_tf->get_inputs();
     const auto& model_outputs = model_tf->get_outputs();
     const auto& model_frozen_inputs = model_tf->get_tensor_values();
-
     std::map<const std::string, const std::function<ov::OutputVector(const NodeContext&)>> translate_map;
 
     const auto& TRANSLATE_OP_MAP = m_op_translators;
     if (no_conversion) {
-        const std::set<std::string> required_types{"Placeholder", "_Retval", "NoOp"};
+        const std::set<std::string> required_types{"Placeholder", "NoOp"};
         for (const auto& name : required_types) {
             translate_map.emplace(name, TRANSLATE_OP_MAP.at(name));
         }
@@ -85,7 +83,6 @@ void FrontEndTF::translate_graph(const ngraph::frontend::InputModel::Ptr& model,
                                 "Input with frozen value has been already met: " + frozen_input_name);
         ng_op_map[frozen_input_name] = {frozen_input_value};
     }
-
     // create parameter nodes for all tensor places corresponding to inputs
     for (const auto& input_place : model_inputs) {
         FRONT_END_GENERAL_CHECK(input_place->get_names().size() == 1, "Input place must have one name.");
@@ -98,17 +95,16 @@ void FrontEndTF::translate_graph(const ngraph::frontend::InputModel::Ptr& model,
         auto input_shape = input_tensor_place->get_partial_shape();
         auto input_type = input_tensor_place->get_element_type();
 
-        auto input_ng_output = ConstructNgNode<ov::opset8::Parameter>(input_name, input_type, input_shape);
-        auto input_ng_node = std::dynamic_pointer_cast<ov::opset8::Parameter>(input_ng_output.get_node_shared_ptr());
-        params.push_back(input_ng_node);
-        ng_op_map[input_name] = {input_ng_output};
+        auto param = std::make_shared<ov::opset8::Parameter>(input_type, input_shape);
+        set_node_name(input_name, param);
+        params.push_back(param);
+        ng_op_map[input_name] = {param};
     }
 
     // create the nGraph ops from TensorFlow ops
     for (const auto& operation_place : operation_places) {
         auto operation_decoder = operation_place->get_decoder();
         auto operation_name = operation_place->get_names()[0];
-
         // output for parameter nodes has been already generated
         if (ng_op_map.count(operation_name)) {
             continue;
@@ -180,7 +176,7 @@ void FrontEndTF::translate_graph(const ngraph::frontend::InputModel::Ptr& model,
                 auto ng_node = std::make_shared<TFFrameworkNode>(operation_decoder,
                                                                  ng_inputs,
                                                                  operation_place->get_output_ports().size());
-                SetTracingInfo(operation_name, ng_node);
+                set_node_name(operation_name, ng_node);
                 ng_outputs = ng_node->outputs();
             }
         }
@@ -251,7 +247,6 @@ void FrontEndTF::translate_graph(const ngraph::frontend::InputModel::Ptr& model,
             results.push_back(std::make_shared<ov::opset8::Result>(node_outputs[producer_port_idx]));
         }
     }
-
     // find all terminal nodes in ngraph graph to complete list of results
     if (results.empty()) {
         for (const auto& node_output_vector : ng_op_map) {
@@ -268,8 +263,7 @@ void FrontEndTF::translate_graph(const ngraph::frontend::InputModel::Ptr& model,
 
     // create the nGraph function
     ng_function = std::make_shared<ov::Function>(results, params, model_name);
-
-    NGRAPH_VLOG(5) << "Done with translations";
+    NGRAPH_DEBUG << "Done with translations";
 }
 
 /// \brief Check if FrontEndTensorflow can recognize model from given parts
@@ -313,11 +307,9 @@ ngraph::frontend::InputModel::Ptr FrontEndTF::load_impl(
 
 std::shared_ptr<ov::Function> FrontEndTF::convert(ngraph::frontend::InputModel::Ptr model) const {
     auto model_tf = std::dynamic_pointer_cast<InputModelTF>(model);
-
     std::shared_ptr<ov::Function> f;
     translate_graph(model_tf, "here_should_be_a_graph_name", true, false, f);
     normalize(f);
-
     // TODO: check that nGraph function does not contain operations which are not in the opset
 
     return f;
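The Parameter-creation change replaces the ConstructNgNode/dynamic_pointer_cast detour with a direct std::make_shared<ov::opset8::Parameter> plus set_node_name. A sketch of what such a helper can do, assuming set_node_name assigns both the friendly name and the output tensor name (the actual helper is defined elsewhere in this PR; the version below is a hypothetical stand-in):

#include <memory>
#include <string>

#include "openvino/opsets/opset8.hpp"

// Hypothetical sketch of a set_node_name-style helper: tags the node with a
// friendly name and names its output tensor so it can later be found by name.
void set_node_name_sketch(const std::string& name, const std::shared_ptr<ov::Node>& node) {
    node->set_friendly_name(name);
    node->get_output_tensor(0).set_names({name});
}

std::shared_ptr<ov::opset8::Parameter> make_input(const std::string& input_name,
                                                  const ov::element::Type& input_type,
                                                  const ov::PartialShape& input_shape) {
    auto param = std::make_shared<ov::opset8::Parameter>(input_type, input_shape);
    set_node_name_sketch(input_name, param);
    return param;
}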
6 changes: 2 additions & 4 deletions ngraph/frontend/tensorflow/src/ngraph_conversions.cpp
@@ -10,27 +10,25 @@ namespace ov {
 namespace frontend {
 namespace tf {
 
-void NHWCtoNCHW(const std::string& op_name, bool need_convert, ov::Output<ov::Node>& node) {
+void convert_nhwc_to_nchw(const std::string& op_name, bool need_convert, ov::Output<ov::Node>& node) {
     if (need_convert) {
         auto rank = node.get_shape().size();
         if (rank == 4) {
             Transpose<0, 3, 1, 2>(node);
         } else if (rank == 5) {
             Transpose3D<0, 4, 1, 2, 3>(node);
         }
-        SetTracingInfo(op_name, node);
     }
 }
 
-void NCHWtoNHWC(const std::string& op_name, bool need_convert, ov::Output<ov::Node>& node) {
+void convert_nchw_to_nhwc(const std::string& op_name, bool need_convert, ov::Output<ov::Node>& node) {
     if (need_convert) {
         auto rank = node.get_shape().size();
         if (rank == 4) {
             Transpose<0, 2, 3, 1>(node);
         } else if (rank == 5) {
             Transpose3D<0, 2, 3, 4, 1>(node);
         }
-        SetTracingInfo(op_name, node);
     }
 }
 
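The renamed converters keep the same permutation orders: NHWC to NCHW uses {0, 3, 1, 2} for 4-D tensors and {0, 4, 1, 2, 3} for 5-D, and the NCHW-to-NHWC direction inverts them. A small sketch (hypothetical permute_shape helper) showing the effect on a shape:

#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical helper: apply an axis permutation to a shape, the way the
// Transpose<...> templates above reorder tensor dimensions.
std::vector<size_t> permute_shape(const std::vector<size_t>& shape,
                                  const std::vector<size_t>& order) {
    std::vector<size_t> out;
    out.reserve(order.size());
    for (auto axis : order) {
        out.push_back(shape[axis]);
    }
    return out;
}

int main() {
    // NHWC {1, 224, 224, 3} -> NCHW {1, 3, 224, 224} with order {0, 3, 1, 2}.
    assert((permute_shape({1, 224, 224, 3}, {0, 3, 1, 2}) ==
            std::vector<size_t>{1, 3, 224, 224}));
    // NCHW -> NHWC with order {0, 2, 3, 1} restores the original layout.
    assert((permute_shape({1, 3, 224, 224}, {0, 2, 3, 1}) ==
            std::vector<size_t>{1, 224, 224, 3}));
}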