Best practice #3039

Merged
6 changes: 6 additions & 0 deletions GOVERNANCE.md
@@ -0,0 +1,6 @@
<!--- SPDX-License-Identifier: Apache-2.0 -->

# Governance

The overall governance of the ONNX-MLIR project is described at https://github.com/onnx/onnx/blob/main/community/readme.md#onnx-open-governance.
The ONNX-MLIR project is under the purview of the Compilers Special Interest Group (Compilers SIG).
5 changes: 5 additions & 0 deletions README.md
@@ -161,6 +161,11 @@ Practically, each `git commit` needs to be signed, see [here](docs/Workflow.md#s

The ONNX-MLIR code of conduct is described at https://onnx.ai/codeofconduct.html.

## Adopters
<!-- Please open a PR to add your company/product here. -->

* The IBM [zDLC compiler](https://github.com/IBM/zDLC) uses onnx-mlir technology to transform ONNX models into executable binaries for [IBM Telum](https://www.ibm.com/z/telum) servers.

## Projects related/using onnx-mlir

* The [onnx-mlir-serving](https://github.com/IBM/onnx-mlir-serving) project implements a gRPC server, written in C++, that serves onnx-mlir compiled models. Thanks to its C++ implementation, ONNX Serving has very low latency overhead and high throughput.