
Commit 0508a66

Fix broken and redirected links (deepjavalibrary#1647)
1 parent 0bb1a92 commit 0508a66

5 files changed: 11 additions & 5 deletions


CONTRIBUTING.md

Lines changed: 2 additions & 2 deletions
@@ -37,8 +37,8 @@ To send us a pull request, please:
 6. Send us a pull request, answering any default questions in the pull request interface.
 7. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.

-GitHub provides additional document on [forking a repository](https://help.github.com/articles/fork-a-repo/) and
-[creating a pull request](https://help.github.com/articles/creating-a-pull-request/).
+GitHub provides additional document on [forking a repository](https://docs.github.com/en/get-started/quickstart/fork-a-repo) and
+[creating a pull request](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request).


 ## Finding contributions to work on

docs/forums.md

Lines changed: 1 addition & 1 deletion
@@ -34,7 +34,7 @@ If you want to talk about the development of DJL itself, look at our [developmen

 ## Pull Request

-If you have an idea that you want to implement for changes to DJL, a bug fix, new datasets, new models, or anything else, open a new [pull request](https://github.com/deepjavalibrary/djl/compare). You can view this guide on [git and how to fork the project and make a pull request](https://guides.github.com/activities/forking/). We also have [documentation for contributors](development/README.md) that can help setup development, explain DJL coding conventions, working with DJL CI, and troubleshooting common problems.
+If you have an idea that you want to implement for changes to DJL, a bug fix, new datasets, new models, or anything else, open a new [pull request](https://github.com/deepjavalibrary/djl/compare). You can view this guide on [git and how to fork the project and make a pull request](https://docs.github.com/en/get-started/quickstart/contributing-to-projects). We also have [documentation for contributors](development/README.md) that can help setup development, explain DJL coding conventions, working with DJL CI, and troubleshooting common problems.

 ## Follow DJL


docs/load_model.md

Lines changed: 1 addition & 1 deletion
@@ -130,7 +130,7 @@ Criteria<Image, Classifications> criteria = Criteria.builder()
 ZooModel<Image, Classifications> model = criteria.loadModel();
 ```

-You can [customize the artifactId and modelName](#customize-artifactid-and-modelname) the same way as loading model from the local file system.
+You can customize the artifactId and modelName the same way as loading model from the local file system.

 ### Load model from AWS S3 bucket
 DJL supports loading a model from an S3 bucket using `s3://` URL and the AWS plugin. See [here](../extensions/aws-ai/README.md) for details.
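For orientation, the `Criteria` builder shown in this hunk is how DJL describes which model to load. A minimal sketch of customizing the artifactId and modelName while loading from a local directory might look like the following; the model URL, artifactId, and modelName values are illustrative placeholders, not part of this commit.

```java
import java.io.IOException;

import ai.djl.MalformedModelException;
import ai.djl.modality.Classifications;
import ai.djl.modality.cv.Image;
import ai.djl.repository.zoo.Criteria;
import ai.djl.repository.zoo.ModelNotFoundException;
import ai.djl.repository.zoo.ZooModel;

public final class LoadLocalModel {

    public static void main(String[] args)
            throws IOException, ModelNotFoundException, MalformedModelException {
        // Placeholder directory and names: adjust to the actual model artifacts on disk.
        Criteria<Image, Classifications> criteria =
                Criteria.builder()
                        .setTypes(Image.class, Classifications.class)
                        .optModelUrls("file:///opt/models/resnet") // hypothetical local path
                        .optArtifactId("resnet")                   // hypothetical artifactId
                        .optModelName("resnet18_v1")               // hypothetical modelName
                        .build();

        // loadModel() resolves the criteria against the configured repositories.
        try (ZooModel<Image, Classifications> model = criteria.loadModel()) {
            System.out.println("Loaded model: " + model.getName());
        }
    }
}
```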

engines/mxnet/mxnet-engine/README.md

Lines changed: 4 additions & 1 deletion
@@ -26,6 +26,7 @@ The javadocs output is built in the `build/doc/javadoc` folder.


 ## Installation
+
 You can pull the MXNet engine from the central Maven repository by including the following dependency:

 ```xml
@@ -43,6 +44,7 @@ It will automatically determine the appropriate jars for your system based on th
 You can choose a native library based on your platform if you don't have network access at runtime.

 ### macOS
+
 For macOS, you can use the following library:

 - ai.djl.mxnet:mxnet-native-mkl:1.8.0:osx-x86_64
@@ -59,6 +61,7 @@ For macOS, you can use the following library:
 ```

 ### Linux
+
 For the Linux platform, you can choose between CPU, GPU. If you have Nvidia [CUDA](https://en.wikipedia.org/wiki/CUDA)
 installed on your GPU machine, you can use one of the following library:

@@ -115,7 +118,7 @@ DJL on Windows, please download and install

 For the Windows platform, you can use CPU package. MXNet windows GPU native
 library size are large, we no longer provide GPU package, instead you have to
-use [Automatic](#automatic-(recommended)) package.
+use the Automatic package.

 #### Windows GPU

extensions/benchmark/README.md

Lines changed: 3 additions & 0 deletions
@@ -77,6 +77,7 @@ gradlew benchmark --args="--help"
 ```

 ## Prerequisite
+
 Please ensure Java 8+ is installed and you are using an OS that DJL supported with.

 After that, you need to clone the djl project and `cd` into the folder.
@@ -93,6 +94,7 @@ If you are trying to use GPU, please ensure the CUDA driver is installed. You ca
 ```
 nvcc -V
 ```
+
 to checkout the version. For different Deep Learning engine you are trying to run the benchmark,
 they have different CUDA version to support. Please check the individual Engine documentation to ensure your CUDA version is supported.

@@ -289,6 +291,7 @@ You can also do multi-threading inference with DJL. For example, if you would li
 ```
 -t 10
 ```
+
 Best thread number for your system: The same number of cores your system have or double of the total cores.

 You can also add `-l` to simulate the increment load for your inference server. It will add threads with the delay of time.
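The `-t` flag above drives the benchmark with multiple inference threads. As a rough, illustrative sketch of what multi-threaded inference looks like in DJL application code (separate from the benchmark tool): the artifactId, sample image URL, and thread count below are assumptions, and each task opens its own `Predictor` while sharing one `ZooModel`, which is the usual pattern since predictors are not meant to be shared across threads.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import ai.djl.inference.Predictor;
import ai.djl.modality.Classifications;
import ai.djl.modality.cv.Image;
import ai.djl.modality.cv.ImageFactory;
import ai.djl.repository.zoo.Criteria;
import ai.djl.repository.zoo.ZooModel;

public final class MultiThreadedInference {

    public static void main(String[] args) throws Exception {
        int threads = 10; // mirrors the "-t 10" benchmark example above

        // Hypothetical criteria: assumes an image classification model with a
        // "resnet" artifactId is available from a model zoo on the classpath.
        Criteria<Image, Classifications> criteria =
                Criteria.builder()
                        .setTypes(Image.class, Classifications.class)
                        .optArtifactId("resnet")
                        .build();

        try (ZooModel<Image, Classifications> model = criteria.loadModel()) {
            // Placeholder input image.
            Image image = ImageFactory.getInstance()
                    .fromUrl("https://resources.djl.ai/images/kitten.jpg");

            ExecutorService pool = Executors.newFixedThreadPool(threads);
            List<Future<Classifications>> futures = new ArrayList<>();
            for (int i = 0; i < threads; i++) {
                // Share the model, but give every task its own Predictor.
                Callable<Classifications> task = () -> {
                    try (Predictor<Image, Classifications> predictor = model.newPredictor()) {
                        return predictor.predict(image);
                    }
                };
                futures.add(pool.submit(task));
            }
            for (Future<Classifications> future : futures) {
                System.out.println(future.get().best());
            }
            pool.shutdown();
        }
    }
}
```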
