Commit 609695d

Add GitHub action to format and lint code (#71)

* Add and run pre-commit hooks
* Restore clang-format
* Fix yaml spacing
* Normalize spacing
* Fix indentation of pre-commit-config.yaml
* Clang to enforce 80 chars, pre-commit all PRs
* Update copyrights
* Remove extra line

1 parent: ab9bd14

11 files changed (+181, -19 lines)

.clang-format
Lines changed: 1 addition & 0 deletions

@@ -2,6 +2,7 @@
 BasedOnStyle: Google
 
 IndentWidth: 2
+ColumnLimit: 80
 ContinuationIndentWidth: 4
 UseTab: Never
 MaxEmptyLinesToKeep: 2
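The new `ColumnLimit: 80` setting caps source lines at 80 characters. As a rough sketch of the rule it enforces (clang-format itself re-wraps long lines rather than merely flagging them; this checker and its sample code are purely illustrative):

```python
# Hypothetical checker illustrating the 80-column rule from ColumnLimit: 80.
# clang-format actually re-wraps offending lines; this only reports them.
def lines_over_limit(source: str, limit: int = 80) -> list[int]:
    """Return 1-based numbers of lines longer than `limit` characters."""
    return [n for n, line in enumerate(source.splitlines(), start=1)
            if len(line) > limit]

# Line 2 below is 110 characters wide, well past the limit.
code = "int x = 0;\n" + "int y = " + "1 + " * 25 + "0;\n"
print(lines_over_limit(code))
```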

.github/workflows/pre-commit.yml
Lines changed: 39 additions & 0 deletions

@@ -0,0 +1,39 @@
+# Copyright 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#  * Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+#  * Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in the
+#    documentation and/or other materials provided with the distribution.
+#  * Neither the name of NVIDIA CORPORATION nor the names of its
+#    contributors may be used to endorse or promote products derived
+#    from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
+# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+name: pre-commit
+
+on:
+  pull_request:
+
+jobs:
+  pre-commit:
+    runs-on: ubuntu-22.04
+    steps:
+      - uses: actions/checkout@v3
+      - uses: actions/setup-python@v3
+      - uses: pre-commit/[email protected]

.pre-commit-config.yaml
Lines changed: 73 additions & 0 deletions

@@ -0,0 +1,73 @@
+# Copyright 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#  * Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+#  * Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in the
+#    documentation and/or other materials provided with the distribution.
+#  * Neither the name of NVIDIA CORPORATION nor the names of its
+#    contributors may be used to endorse or promote products derived
+#    from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
+# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+repos:
+- repo: https://github.com/timothycrosley/isort
+  rev: 5.12.0
+  hooks:
+  - id: isort
+    additional_dependencies: [toml]
+- repo: https://github.com/psf/black
+  rev: 23.1.0
+  hooks:
+  - id: black
+    types_or: [python, cython]
+- repo: https://github.com/PyCQA/flake8
+  rev: 5.0.4
+  hooks:
+  - id: flake8
+    args: [--max-line-length=88, --select=C,E,F,W,B,B950, --extend-ignore=E203,E501]
+    types_or: [python, cython]
+- repo: https://github.com/pre-commit/mirrors-clang-format
+  rev: v16.0.5
+  hooks:
+  - id: clang-format
+    types_or: [c, c++, cuda, proto, textproto, java]
+    args: ["-fallback-style=none", "-style=file", "-i"]
+- repo: https://github.com/codespell-project/codespell
+  rev: v2.2.4
+  hooks:
+  - id: codespell
+    additional_dependencies: [tomli]
+    args: ["--toml", "pyproject.toml"]
+    exclude: (?x)^(.*stemmer.*|.*stop_words.*|^CHANGELOG.md$)
+# More details about these pre-commit hooks here:
+# https://pre-commit.com/hooks.html
+- repo: https://github.com/pre-commit/pre-commit-hooks
+  rev: v4.4.0
+  hooks:
+  - id: check-case-conflict
+  - id: check-executables-have-shebangs
+  - id: check-merge-conflict
+  - id: check-json
+  - id: check-toml
+  - id: check-yaml
+  - id: check-shebang-scripts-are-executable
+  - id: end-of-file-fixer
+    types_or: [c, c++, cuda, proto, textproto, java, python]
+  - id: mixed-line-ending
+  - id: requirements-txt-fixer
+  - id: trailing-whitespace
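The codespell hook's `exclude` pattern is an ordinary Python regex that pre-commit matches against each candidate file path, anchored at the start; files that match are skipped. A quick sketch of its effect (the sample paths are illustrative, not files from this repo):

```python
import re

# The exclude pattern from the codespell hook in .pre-commit-config.yaml.
# (?x) enables verbose mode; it is a no-op here since the pattern
# contains no whitespace or comments.
exclude = re.compile(r"(?x)^(.*stemmer.*|.*stop_words.*|^CHANGELOG.md$)")

def is_skipped(path: str) -> bool:
    """True if pre-commit would withhold this path from codespell."""
    return exclude.match(path) is not None

print(is_skipped("src/english_stemmer.h"))  # True: contains "stemmer"
print(is_skipped("CHANGELOG.md"))           # True: exact name match
print(is_skipped("src/model_state.cc"))     # False: codespell checks it
```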

CMakeLists.txt
Lines changed: 2 additions & 2 deletions

@@ -1,4 +1,4 @@
-# Copyright 2021-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# Copyright 2021-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 #
 # Redistribution and use in source and binary forms, with or without
 # modification, are permitted provided that the following conditions
@@ -51,7 +51,7 @@ set(TRITON_TENSORRT_BACKEND_INSTALLDIR ${CMAKE_INSTALL_PREFIX}/backends/tensorrt
 #
 # Dependencies
 #
-# FetchContent's composibility isn't very good. We must include the
+# FetchContent's composability isn't very good. We must include the
 # transitive closure of all repos so that we can override the tag.
 #
 include(FetchContent)

README.md
Lines changed: 2 additions & 2 deletions

@@ -1,5 +1,5 @@
 <!--
-# Copyright 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# Copyright 2021-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 #
 # Redistribution and use in source and binary forms, with or without
 # modification, are permitted provided that the following conditions
@@ -30,7 +30,7 @@
 
 # TensorRT Backend
 
-The Triton backend for [TensorRT](https://github.com/NVIDIA/TensorRT). 
+The Triton backend for [TensorRT](https://github.com/NVIDIA/TensorRT).
 You can learn more about Triton backends in the [backend
 repo](https://github.com/triton-inference-server/backend). Ask
 questions or report problems on the [issues

pyproject.toml
Lines changed: 49 additions & 0 deletions

@@ -0,0 +1,49 @@
+# Copyright 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#  * Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+#  * Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in the
+#    documentation and/or other materials provided with the distribution.
+#  * Neither the name of NVIDIA CORPORATION nor the names of its
+#    contributors may be used to endorse or promote products derived
+#    from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
+# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+[tool.codespell]
+# note: pre-commit passes explicit lists of files here, which this skip file list doesn't override -
+# this is only to allow you to run codespell interactively
+skip = "./.git,./.github"
+# ignore short words, and typename parameters like OffsetT
+ignore-regex = "\\b(.{1,4}|[A-Z]\\w*T)\\b"
+# use the 'clear' dictionary for unambiguous spelling mistakes
+builtin = "clear"
+# disable warnings about binary files and wrong encoding
+quiet-level = 3
+
+[tool.isort]
+profile = "black"
+use_parentheses = true
+multi_line_output = 3
+include_trailing_comma = true
+force_grid_wrap = 0
+ensure_newline_before_comments = true
+line_length = 88
+balanced_wrapping = true
+indent = "    "
+skip = ["build"]
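The `ignore-regex` above tells codespell never to flag words of one to four characters or capitalized identifiers ending in `T` (typename parameters like `OffsetT`). Its behavior can be sketched in Python; matching each token with `fullmatch` is a simplification of how codespell applies the regex, and the sample words are illustrative:

```python
import re

# The [tool.codespell] ignore-regex, with the TOML escaping removed:
# a word of 1-4 characters, or an identifier starting with a capital
# letter and ending in T, is never reported as a typo.
ignore = re.compile(r"\b(.{1,4}|[A-Z]\w*T)\b")

for word in ["teh", "OffsetT", "recieve"]:
    skipped = ignore.fullmatch(word) is not None
    print(word, "ignored" if skipped else "checked")
```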

src/instance_state.cc
Lines changed: 8 additions & 8 deletions

@@ -1071,7 +1071,7 @@ ModelInstanceState::Run(
 io_binding_info.batch_output_ = model_state_->FindBatchOutput(name);
 
 // Process the output tensors with pinned memory address if zero-copy is
-// supported, otherwise use device memory. Peform memory copies
+// supported, otherwise use device memory. Perform memory copies
 // asynchronously and wait for model execution.
 payload_->responder_->ProcessBatchOutput(
 name, *(io_binding_info.batch_output_),
@@ -1116,7 +1116,7 @@ ModelInstanceState::Run(
 
 if (io_binding_info.is_requested_output_tensor_) {
 // Process the output tensors with pinned memory address if zero-copy is
-// supported, otherwise use device memory. Peform memory copies
+// supported, otherwise use device memory. Perform memory copies
 // asynchronously and wait for model execution.
 payload_->responder_->ProcessTensor(
 name, dt, batchn_shape,
@@ -2244,7 +2244,7 @@ ModelInstanceState::InitializeBatchInputBindings(
 }
 } else {
 // For most type 'dims' will be empty as the full shape
-// of the batch input is [-1] which will be coverred by
+// of the batch input is [-1] which will be covered by
 // batch dimension.
 switch (batch_input.BatchInputKind()) {
 case BatchInput::Kind::BATCH_ELEMENT_COUNT:
@@ -2481,7 +2481,7 @@ ModelInstanceState::InitializeConfigShapeOutputBindings(
 // [DLIS-4283] review below comment
 // Allocate CUDA memory. Use cudaHostAlloc if zero copy
 // supported. We rely on buffer_bindings_ being non-nullptr to
-// indicate that the buffer has been correctly initalized so
+// indicate that the buffer has been correctly initialized so
 // even for zero-sized tensors always allocate something.
 void* buffer = nullptr;
 cudaError_t err = cudaSuccess;
@@ -2727,7 +2727,7 @@ ModelInstanceState::InitializeExecuteInputBinding(
 
 // Allocate CUDA memory. Use cudaHostAlloc if zero copy supported.
 // We rely on buffer_bindings_ being non-nullptr to indicate that
-// the buffer has been correctly initalized so even for zero-sized
+// the buffer has been correctly initialized so even for zero-sized
 // tensors always allocate something.
 void* buffer = nullptr;
 cudaError_t err = cudaSuccess;
@@ -2901,7 +2901,7 @@ ModelInstanceState::InitializeExecuteOutputBinding(
 
 // Allocate CUDA memory. Use cudaHostAlloc if zero copy supported.
 // We rely on buffer_bindings_ being non-nullptr to indicate that
-// the buffer has been correctly initalized so even for zero-sized
+// the buffer has been correctly initialized so even for zero-sized
 // tensors always allocate something.
 void* buffer = nullptr;
 cudaError_t err = cudaSuccess;
@@ -3111,7 +3111,7 @@ ModelInstanceState::InitializeShapeInputBinding(
 if (max_byte_size != 0) {
 // Allocate CUDA memory. Use cudaHostAlloc if zero copy supported.
 // We rely on buffer_bindings_ being non-nullptr to indicate that
-// the buffer has been correctly initalized so even for zero-sized
+// the buffer has been correctly initialized so even for zero-sized
 // tensors always allocate something.
 void* buffer = nullptr;
 cudaError_t err = cudaSuccess;
@@ -4131,7 +4131,7 @@ TRTv3Interface::SetBindingDimensions(
 }
 
 if (!trt_context.is_dynamic_per_binding_[io_index]) {
-// No need to set dimension for the binding that does not inlcude
+// No need to set dimension for the binding that does not include
 // dynamic shape.
 return nullptr;
 }

src/instance_state.h
Lines changed: 3 additions & 3 deletions

@@ -1,4 +1,4 @@
-// Copyright 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+// Copyright 2022-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 //
 // Redistribution and use in source and binary forms, with or without
 // modification, are permitted provided that the following conditions
@@ -111,7 +111,7 @@ struct TensorRTContext {
 
 // The key is packed input dimensions prepended by batch size, so
 // that uniqueness is guaranteed and the CUDA graphs are sorted to
-// provide convinence to find the closest CUDA graph in the
+// provide convenience to find the closest CUDA graph in the
 // future.
 //
 // vector is used to map index of event sets to corresponding
@@ -181,7 +181,7 @@ class TRTInterface {
 virtual bool Enqueue(nvinfer1::IExecutionContext* context) = 0;
 
 // This function will be called to specify the runtime shape of the input and
-// adding metadata into exisiting 'cuda_graph_key' for graph lookup.
+// adding metadata into existing 'cuda_graph_key' for graph lookup.
 virtual TRITONSERVER_Error* SetBindingDimensions(
 const std::string& input_name, const std::vector<int64_t>& shape,
 const TensorRTContext& trt_context, const size_t io_index,

src/model_state.cc
Lines changed: 2 additions & 2 deletions

@@ -240,7 +240,7 @@ ModelState::CreateEngine(
 .c_str());
 
 if (IsEngineSharingEnabled()) {
-// This logic runs atleast once to validate whether the engine
+// This logic runs at least once to validate whether the engine
 // can be shared.
 bool is_dynamic = false;
 for (int idx = 0; idx < eit->second.second->getNbBindings(); idx++) {
@@ -929,4 +929,4 @@ ModelState::FixIO(
 return nullptr;
 }
 
-}}} // namespace triton::backend::tensorrt
+}}}  // namespace triton::backend::tensorrt

src/model_state.h
Lines changed: 1 addition & 1 deletion

@@ -75,7 +75,7 @@ class ModelState : public TensorRTModel {
 
 // Query the execution arbitrator to return the instance for the execution and
 // the semaphore to check whether the next execution should be initiated.
-// 'device_id', 'instance' are the metadata assoicated with the
+// 'device_id', 'instance' are the metadata associated with the
 // TRITONBACKEND_ModelInstance.
 std::pair<ModelInstanceState*, Semaphore*> ExecutionState(
 const int device_id, ModelInstanceState* instance)

src/tensorrt.cc
Lines changed: 1 addition & 1 deletion

@@ -111,7 +111,7 @@ TRITONBACKEND_Initialize(TRITONBACKEND_Backend* backend)
 RETURN_IF_ERROR(backend_config.Parse(buffer, byte_size));
 }
 
-// Default execution policy, may be overriden by backend config
+// Default execution policy, may be overridden by backend config
 auto execution_policy = TRITONBACKEND_EXECUTION_DEVICE_BLOCKING;
 
 std::unique_ptr<BackendConfiguration> lconfig(new BackendConfiguration());
