diff --git a/README.md b/README.md
index 5e01d840ee..844f83c37b 100644
--- a/README.md
+++ b/README.md
@@ -142,6 +142,7 @@ For your convenience, we provide the setup script, `MIVisionX-setup.py`, which i
 --backend [MIVisionX Dependency Backend - optional (default:HIP) [options:HIP/OCL/CPU]]
 --rocm_path [ROCm Installation Path - optional (default:/opt/rocm ROCm Installation Required)]
 ```
+
 > [!NOTE]
 > * Install ROCm before running the setup script
 > * This script only needs to be executed once
@@ -155,7 +156,7 @@ For your convenience, we provide the setup script, `MIVisionX-setup.py`, which i
 git clone https://github.com/ROCm/MIVisionX.git
 ```
 
-> [!NOTE]
+> [!IMPORTANT]
 > MIVisionX has support for two GPU backends: **OPENCL** and **HIP**
 
 * Instructions for building MIVisionX with the **HIP** GPU backend (default backend):
@@ -206,7 +207,7 @@ For your convenience, we provide the setup script, `MIVisionX-setup.py`, which i
 
 macOS [build instructions](https://github.com/ROCm/MIVisionX/wiki/macOS#macos-build-instructions)
 
 > [!IMPORTANT]
-> MIVisionX CPU only backend is supported in macOS
+> macOS supports only the MIVisionX CPU backend
 
 ## Verify installation
@@ -230,8 +231,9 @@ macOS [build instructions](https://github.com/ROCm/MIVisionX/wiki/macOS#macos-bu
 export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/rocm/lib
 runvx /opt/rocm/share/mivisionx/samples/gdf/canny.gdf
 ```
+
 > [!NOTE]
-> * More samples are available [here](samples#samples)
+> * More samples are available [here](samples/README.md#samples)
 > * For `macOS` use `export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:/opt/rocm/lib`
 
 #### Verify with mivisionx-test package
@@ -256,7 +258,7 @@ ctest -VV
 
 MIVisionX provides developers with docker images for Ubuntu `20.04` / `22.04`. Using docker images developers can quickly prototype and build applications without having to be locked into a single system setup or lose valuable time figuring out the dependencies of the underlying software.
 
-Docker files to build MIVisionX containers and suggested workflow are [available](docker#mivisionx-docker)
+Docker files to build MIVisionX containers and a suggested workflow are [available](docker/README.md#mivisionx-docker)
 
 ### MIVisionX docker
 * [Ubuntu 20.04](https://cloud.docker.com/repository/docker/mivisionx/ubuntu-20.04)
diff --git a/amd_openvx_extensions/amd_migraphx/README.md b/amd_openvx_extensions/amd_migraphx/README.md
index e877c750e3..cac2c954e0 100644
--- a/amd_openvx_extensions/amd_migraphx/README.md
+++ b/amd_openvx_extensions/amd_migraphx/README.md
@@ -34,6 +34,6 @@ node com.amd.amd_migraphx_node model image_tensor output_tensor
 write output_tensor out_mnist.f32
 ```
 
-For additional examples for using the `vx_amd_migraphx` extension, please see [amd_migraphx_test](https://github.com/ROCm/MIVisionX/tree/master/tests/amd_migraphx_test/) section.
+For additional examples of using the `vx_amd_migraphx` extension, please see the [amd_migraphx_tests](https://github.com/ROCm/MIVisionX/tree/master/tests/amd_migraphx_tests/) section.
 
 **NOTE:** OpenVX and the OpenVX logo are trademarks of the Khronos Group Inc.
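For quick reference, here is a minimal sketch of how a GDF using the `vx_amd_migraphx` extension can be run with `runvx`, assuming the extension is installed under `/opt/rocm`; the file name `mnist_migraphx.gdf` is a placeholder for a GDF containing the `node` and `write` statements quoted in the hunk above.

```shell
# Sketch only: run a hypothetical GDF (mnist_migraphx.gdf) that invokes the
# vx_amd_migraphx node and writes its output tensor, as in the README hunk above.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/rocm/lib
runvx file mnist_migraphx.gdf   # expected to produce out_mnist.f32 via the write statement
```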
diff --git a/amd_openvx_extensions/amd_winml/samples/README.md b/amd_openvx_extensions/amd_winml/samples/README.md
index ebe1a189d2..3d20d854d7 100644
--- a/amd_openvx_extensions/amd_winml/samples/README.md
+++ b/amd_openvx_extensions/amd_winml/samples/README.md
@@ -4,8 +4,8 @@ Get ONNX models from [ONNX Model Zoo](https://github.com/onnx/models)
 
 ## Sample - SqueezeNet
 
-* Download the [SqueezeNet](https://s3.amazonaws.com/download.onnx/models/opset_8/squeezenet.tar.gz) ONNX Model
-* Use [Netron](https://lutzroeder.github.io/netron/) to open the model.onnx
+* Download the [SqueezeNet](https://github.com/onnx/models/tree/main/validated/vision/classification/squeezenet#squeezenet) ONNX Model
+* Use [Netron](https://github.com/lutzroeder/netron) to open the model.onnx
 * Look at Model Properties to find Input & Output Tensor Name (data_0 - input; softmaxout_1 - output)
 * Look at output tensor dimensions (n,c,h,w - [1,1000,1,1] for softmaxout_1)
 * Use the label file - [data\Labels.txt](data/Labels.txt) and sample image - data\car.JPEG to run samples
@@ -91,7 +91,7 @@ data labelLocation = scalar:STRING,FULL_PATH_TO\data\Labels.txt
 ## Sample - FER+ Emotion Recognition
 
 * Download the [FER+ Emotion Recognition](https://onnxzoo.blob.core.windows.net/models/opset_8/emotion_ferplus/emotion_ferplus.tar.gz) ONNX Model
-* Use [Netron](https://lutzroeder.github.io/netron/) to open the model.onnx
+* Use [Netron](https://github.com/lutzroeder/netron) to open the model.onnx
 * Look at Model Properties to find Input & Output Tensor Name (Input3 - input; Plus692_Output_0 - output)
 * Look at output tensor dimensions (n,c,h,w - [1,8] for Plus692_Output_0)
 * Use the label file - [data/emotions.txt](data/emotions.txt) to run sample
diff --git a/apps/README.md b/apps/README.md
index 36dd55c01b..279f3ebf5d 100644
--- a/apps/README.md
+++ b/apps/README.md
@@ -3,7 +3,7 @@
 MIVisionX has several applications built on top of OpenVX and its modules, it uses AMD optimized libraries to build applications that can be used as prototypes or used as models to develop products.
 
 ## Prerequisites
-* [MIVisionX](https://github.com/ROCm/MIVisionX/README.md#build--install-mivisionx) installed
+* [MIVisionX](https://github.com/ROCm/MIVisionX/blob/master/README.md#prerequisites) installed
 
 ## Bubble Pop
 
diff --git a/apps/dg_test/README.md b/apps/dg_test/README.md
index 31d15ffa44..025092c594 100644
--- a/apps/dg_test/README.md
+++ b/apps/dg_test/README.md
@@ -55,7 +55,7 @@ See the below section for using your caffemodel.
 
 ### Testing with your Caffemodel
 
-You can test your trained MNIST caffemodel using the [model compiler](https://github.com/ROCm/amdovx-modules/tree/develop/utils/model_compiler)
+You can test your trained MNIST caffemodel using the [model compiler](https://github.com/ROCm/MIVisionX/tree/master/model_compiler)
 
 1. Convert your caffemodel->NNIR->openvx using the model compiler.
 2. From the generated files, copy
diff --git a/apps/mivisionx_winml_classifier/README.md b/apps/mivisionx_winml_classifier/README.md
index 49fbbc54fb..0b03dd7573 100644
--- a/apps/mivisionx_winml_classifier/README.md
+++ b/apps/mivisionx_winml_classifier/README.md
@@ -50,11 +50,14 @@ This application is a sample for developing windows application using MIVisionX
 
 ## MIVisionX Image Classification
 
-![MIVisionX Image Classification](images/MIVisionX-ImageClassification.png)
+![MIVisionX Image Classification](https://raw.githubusercontent.com/ROCm/MIVisionX/master/apps/mivisionx_winml_classifier/images/MIVisionX-ImageClassification.png)
+
 
 ## MIVisionX Image Classification using WinML
 
-![MIVisionX Image Classification using WinML](images/MIVisionX-ImageClassification-WinML.png)
+![MIVisionX Image Classification using WinML](https://raw.githubusercontent.com/ROCm/MIVisionX/master/apps/mivisionx_winml_classifier/images/MIVisionX-ImageClassification-WinML.png)
+
+
 
 Example:
 
diff --git a/docs/doxygen/Doxyfile b/docs/doxygen/Doxyfile
index 8d351e378f..ad025f104a 100644
--- a/docs/doxygen/Doxyfile
+++ b/docs/doxygen/Doxyfile
@@ -943,8 +943,8 @@ WARN_LOGFILE =
 # spaces. See also FILE_PATTERNS and EXTENSION_MAPPING
 # Note: If this tag is empty the current directory is searched.
 
-INPUT = ../../README.md \
-  ../../amd_openvx/openvx/include/VX/vx.h \
+#INPUT = ../README.md \
+INPUT = ../../amd_openvx/openvx/include/VX/vx.h \
   ../../amd_openvx/openvx/include/VX/vx_api.h \
   ../../amd_openvx/openvx/include/VX/vx_compatibility.h \
   ../../amd_openvx/openvx/include/VX/vx_kernels.h \
@@ -1171,7 +1171,8 @@ FILTER_SOURCE_PATTERNS =
 # (index.html). This can be useful if you have a project on for instance GitHub
 # and want to reuse the introduction page also for the doxygen output.
 
-USE_MDFILE_AS_MAINPAGE = README.md
+#USE_MDFILE_AS_MAINPAGE = README.md
+USE_MDFILE_AS_MAINPAGE =
 
 # The Fortran standard specifies that for fixed formatted Fortran code all
 # characters from position 72 are to be considered as comment. A common
diff --git a/docs/sphinx/_toc.yml.in b/docs/sphinx/_toc.yml.in
index 9193136f3d..9118e95f81 100644
--- a/docs/sphinx/_toc.yml.in
+++ b/docs/sphinx/_toc.yml.in
@@ -1,6 +1,6 @@
 # Anywhere {branch} is used, the branch name will be substituted.
 # These comments will also be removed.
-root: doxygen/html/index
+root: README
 subtrees:
 - numbered: False
   entries:
diff --git a/samples/loom_360_stitch/README.md b/samples/loom_360_stitch/README.md
index a57acc2be4..de41196986 100644
--- a/samples/loom_360_stitch/README.md
+++ b/samples/loom_360_stitch/README.md
@@ -2,7 +2,7 @@
 
 MIVisionX samples using [LoomShell](https://github.com/ROCm/MIVisionX/tree/master/utilities/loom_shell#radeon-loomshell)
 
-[![Loom Stitch](https://raw.githubusercontent.com/ROCm/MIVisionX/master/docs/data/loom-4.png)](https://youtu.be/E8pPU04iZjw)
+[![Loom Stitch](https://raw.githubusercontent.com/ROCm/MIVisionX/master/docs/data/LOOM_LOGO_250X125.png)](https://youtu.be/E8pPU04iZjw)
 
 **Note:**
 
diff --git a/utilities/runvx/README.md b/utilities/runvx/README.md
index 27d6ff39e4..43ccfcdfe5 100644
--- a/utilities/runvx/README.md
+++ b/utilities/runvx/README.md
@@ -340,7 +340,7 @@ If available, this project uses OpenCV for camera capture and image display.
 Here are few examples that demonstrate use of RUNVX prototyping tool.
 
 ### Canny Edge Detector
-This example demonstrates building OpenVX graph for Canny edge detector. Use [face1.jpg](https://raw.githubusercontent.com/ROCm/amdovx-core/master/examples/images/face1.jpg) for this example.
+This example demonstrates building an OpenVX graph for the Canny edge detector. Use [face1.jpg](https://raw.githubusercontent.com/ROCm/MIVisionX/master/samples/images/face1.jpg) for this example.
 
     % runvx[.exe] file canny.gdf
 
@@ -367,7 +367,7 @@ File **canny.gdf**:
     node org.khronos.openvx.canny_edge_detector luma hyst gradient_size !NORM_L1 output
 
 ### Skintone Pixel Detector
-This example demonstrates building OpenVX graph for pixel-based skin tone detector [Peer et al. 2003]. Use [face1.jpg](https://raw.githubusercontent.com/ROCm/amdovx-core/master/examples/images/face1.jpg) for this example.
+This example demonstrates building an OpenVX graph for a pixel-based skin tone detector [Peer et al. 2003]. Use [face1.jpg](https://raw.githubusercontent.com/ROCm/MIVisionX/master/samples/images/face1.jpg) for this example.
 
     % runvx[.exe] file skintonedetect.gdf
 
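To try the two GDF examples above end to end, a minimal shell sketch follows; it assumes `runvx` is installed under `/opt/rocm` and that `canny.gdf` and `skintonedetect.gdf` sit in the current directory and reference `face1.jpg` by name.

```shell
# Sketch only: fetch the sample image referenced above and run both examples.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/rocm/lib   # on macOS, use DYLD_LIBRARY_PATH instead
wget https://raw.githubusercontent.com/ROCm/MIVisionX/master/samples/images/face1.jpg
runvx file canny.gdf            # Canny edge detector graph
runvx file skintonedetect.gdf   # pixel-based skin tone detector graph
```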