9 changes: 9 additions & 0 deletions .gitignore
@@ -98,6 +98,15 @@ docs/docs/06-api-reference/
# integration test model assets
packages/react-native-executorch/common/rnexecutorch/tests/integration/assets/models/

# release artifact staging dir (produced by scripts/package-release-artifacts.sh)
packages/react-native-executorch/dist-artifacts/

# on-demand native libs (downloaded at postinstall time, not committed)
packages/react-native-executorch/third-party/android/libs/
packages/react-native-executorch/third-party/ios/ExecutorchLib.xcframework/
packages/react-native-executorch/third-party/ios/libs/
packages/react-native-executorch/rne-build-config.json

# custom
*.tgz
Makefile
36 changes: 36 additions & 0 deletions docs/docs/01-fundamentals/01-getting-started.md
@@ -77,6 +77,42 @@ Installation is pretty straightforward, use your package manager of choice to in
</TabItem>
</Tabs>

### Configuring backends and extras

On install, `react-native-executorch` runs a `postinstall` script that downloads prebuilt native libraries from the matching GitHub Release and unpacks them under `third-party/`. By default every optional feature is included — which keeps the app binary large. You can opt out of anything you don't need by adding an `extras` array to your app's `package.json`:

```json
{
"react-native-executorch": {
"extras": ["xnnpack", "coreml", "vulkan", "opencv", "phonemizer"]
}
}
```

If the `extras` key is omitted, all five features are enabled. To disable a feature, drop its name from the array. If you only need LLM inference with XNNPACK on iOS, for example, set `"extras": ["xnnpack"]`.

| Extra | Platforms | What it enables |
| ------------ | ------------ | ------------------------------------------------------------- |
| `xnnpack` | iOS, Android | XNNPACK CPU backend (required for most quantized models) |
| `coreml` | iOS | Core ML backend (Apple Neural Engine / GPU acceleration) |
| `vulkan` | Android | Vulkan GPU backend |
| `opencv` | iOS, Android | Computer-vision models (classification, detection, OCR, etc.) |
| `phonemizer` | iOS, Android | Text-to-speech models |

Source files and native libraries are excluded from compilation when an extra is disabled, so builds that only need LLMs can skip OpenCV and cut tens of megabytes off the final binary.

The postinstall step honors a few environment variables:

| Variable | Purpose |
| ---------------------- | ------------------------------------------------------------------------- |
| `RNET_SKIP_DOWNLOAD=1` | Skip the download entirely (for CI with pre-cached libraries). |
| `RNET_LIBS_CACHE_DIR` | Custom cache directory (default: `~/.cache/react-native-executorch/<v>`). |
| `RNET_TARGET` | Force a specific target, e.g. `android-arm64-v8a` or `ios`. |
| `RNET_NO_X86_64=1` | Skip the Android x86_64 tarball (handy when only building for a device). |
| `GITHUB_TOKEN` | Required to access draft releases while iterating on a new version. |

After changing `extras`, re-run `yarn install` (or the equivalent) so the postinstall script regenerates `rne-build-config.json` and re-extracts the right tarballs, then rebuild the native project.

:::warning
Before using any other API, you must call `initExecutorch` with a resource fetcher adapter at the entry point of your app:

168 changes: 168 additions & 0 deletions packages/react-native-executorch/NATIVE_LIBS_PIPELINE.md
@@ -0,0 +1,168 @@
# Native libraries pipeline

This document describes how native dependencies (ExecuTorch runtime, backends, OpenCV, phonemizer) are produced, shipped, and stitched into an app build. It is intended for maintainers — the user-facing summary lives in `docs/docs/01-fundamentals/01-getting-started.md`.

## High-level flow

```
┌──────────────────────┐      ┌──────────────────────────┐      ┌───────────────────────┐
│ ExecuTorch fork      │ ───▶ │ GitHub Release v<ver>    │ ───▶ │ postinstall script    │
│ + our patches        │      │ <artifact>.tar.gz        │      │ download-libs.js      │
│ (separate repo)      │      │ <artifact>.tar.gz.sha256 │      └───────────┬───────────┘
└──────────────────────┘      └──────────────────────────┘                  │
                                                                            ▼
                                                                ┌───────────────────────┐
                                                                │ third-party/android   │
                                                                │ third-party/ios       │
                                                                │ rne-build-config.json │
                                                                └───────────┬───────────┘
                                                                            │
                                                ┌───────────────────────────┴────────────────────────────┐
                                                ▼                                                        ▼
                                    ┌───────────────────────┐                              ┌─────────────────────────┐
                                    │ android/build.gradle  │                              │ react-native-executorch │
                                    │ + CMakeLists.txt      │                              │  .podspec               │
                                    │ -DRNE_ENABLE_*        │                              │ -DRNE_ENABLE_*          │
                                    └───────────────────────┘                              │ force_load xcframeworks │
                                                                                           └─────────────────────────┘
```

## Install-time: `scripts/download-libs.js`

Runs at `postinstall`. Responsibilities:

1. Read `react-native-executorch.extras` from the app's `package.json` (uses `INIT_CWD`). Defaults to `["opencv", "phonemizer", "xnnpack", "coreml", "vulkan"]`.
2. Write `rne-build-config.json` at the package root with boolean flags — this file is the single source of truth consumed by both the Gradle build and the podspec.
3. Detect targets (`ios` on macOS; always `android-arm64-v8a` and, unless `RNET_NO_X86_64` is set, `android-x86_64`).
4. For each target × enabled extra, fetch the corresponding `<artifact>.tar.gz` from the GitHub Release tagged `v${PACKAGE_VERSION}`, verify the `.sha256`, and extract into `third-party/android/libs/` or `third-party/ios/`.
5. Cache validated tarballs under `~/.cache/react-native-executorch/<version>/` so subsequent installs skip the network.
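Step 2's mapping from the `extras` array to boolean config flags can be sketched in shell. The real implementation is JavaScript in `scripts/download-libs.js`; the flag names below match the ones `build.gradle` reads, everything else is illustrative:

```shell
# Sketch of step 2: derive rne-build-config.json flags from the extras array.
set -eu
EXTRAS="xnnpack coreml"            # what the app's package.json opted into
# flag <name>: prints "true" if <name> appears in EXTRAS, else "false"
flag() { case " $EXTRAS " in *" $1 "*) echo true ;; *) echo false ;; esac; }
cat <<EOF
{
  "enableOpencv": $(flag opencv),
  "enablePhonemizer": $(flag phonemizer),
  "enableXnnpack": $(flag xnnpack),
  "enableCoreml": $(flag coreml),
  "enableVulkan": $(flag vulkan)
}
EOF
```

With the sample `EXTRAS` above, `enableXnnpack` and `enableCoreml` come out `true` while the other three come out `false` — i.e. everything not listed is disabled, matching the opt-out semantics described earlier.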

Environment overrides: `RNET_SKIP_DOWNLOAD`, `RNET_LIBS_CACHE_DIR`, `RNET_TARGET`, `RNET_BASE_URL` (useful with `python3 -m http.server` against `dist-artifacts/` for local iteration), `GITHUB_TOKEN` (needed for draft releases).

The set of artifacts per target is defined in `getArtifacts()`:

| Artifact name | Target | Produced by | Contents |
| ---------------------------------------- | ------- | ---------------------------------------- | ------------------------------------------------------------- |
| `core-android-arm64-v8a` | Android | ExecuTorch fork build | `libexecutorch.so` (backends baked in), headers |
| `core-android-x86_64` | Android | ExecuTorch fork build | x86_64 `libexecutorch.so` for the simulator |
| `core-ios` | iOS | `third-party/ios/ExecutorchLib/build.sh` | `ExecutorchLib.xcframework` |
| `xnnpack-ios` | iOS | `third-party/ios/ExecutorchLib/build.sh` | `XnnpackBackend.xcframework` |
| `coreml-ios` | iOS | `third-party/ios/ExecutorchLib/build.sh` | `CoreMLBackend.xcframework` |
| `xnnpack-android-*`, `vulkan-android-*` | Android | (currently baked into core) | Placeholder tarballs — backends are inside `libexecutorch.so` |
| `opencv-android-*` | Android | OpenCV release process | Static OpenCV + KleidiCV HAL |
| `phonemizer-android-*`, `phonemizer-ios` | both | phonemizer build | `libphonemis.a` (iOS: physical + simulator) |

(`opencv-ios` is not a tarball — iOS consumes OpenCV through the `opencv-rne` CocoaPod.)

## Build-time: Android

`android/build.gradle` reads `rne-build-config.json` once and forwards the booleans to CMake:

```groovy
"-DRNE_ENABLE_OPENCV=${rneBuildConfig.enableOpencv ? 'ON' : 'OFF'}",
"-DRNE_ENABLE_PHONEMIZER=${rneBuildConfig.enablePhonemizer ? 'ON' : 'OFF'}",
"-DRNE_ENABLE_XNNPACK=${rneBuildConfig.enableXnnpack ? 'ON' : 'OFF'}",
"-DRNE_ENABLE_VULKAN=${rneBuildConfig.enableVulkan ? 'ON' : 'OFF'}"
```

`android/CMakeLists.txt` and `android/src/main/cpp/CMakeLists.txt` respond by:

- Adding `-DRNE_ENABLE_OPENCV` / `-DRNE_ENABLE_PHONEMIZER` compile definitions so C++ code can `#ifdef` around optional dependencies.
- Conditionally linking `libopencv_*.a`, KleidiCV HAL (arm64 only), and `libphonemis.a`.
- Always linking against the single prebuilt `libexecutorch.so` downloaded into `third-party/android/libs/executorch/<abi>/`.

**Backends (XNNPACK, Vulkan) are NOT separate shared libraries on Android.** They are compiled into `libexecutorch.so` during the ExecuTorch fork build. `RNE_ENABLE_XNNPACK`/`RNE_ENABLE_VULKAN` today act only as feature flags for C++ code paths that reference backend-specific APIs — they do not toggle native library linkage. This is a deliberate simplification (see the "Why backends differ" section below).

## Build-time: iOS

`react-native-executorch.podspec` reads the same `rne-build-config.json` and:

- Excludes opencv/phonemizer C++ sources from compilation when those extras are off.
- Appends `-DRNE_ENABLE_*` to `OTHER_CPLUSPLUSFLAGS`.
- Assembles `OTHER_LDFLAGS[sdk=iphoneos*]` and `OTHER_LDFLAGS[sdk=iphonesimulator*]` with `-force_load` entries for each enabled backend xcframework.
- Declares `ExecutorchLib.xcframework` in `vendored_frameworks` but _not_ the backend xcframeworks — backend xcframeworks only live on the linker command line, never in the CocoaPods vendoring list (see next section for why).
- Adds `sqlite3` and the `CoreML` system framework to linkage only when Core ML is enabled.

## Why backends differ between platforms

ExecuTorch registers kernels statically via `__attribute__((constructor))` functions inside each backend's `.a`/`.so`. Two design points fall out of this:

1. **Force-load is required.** Linkers drop unreferenced object files. The registrar symbols have no external users (they run as global constructors at load time), so a plain link keeps the backend library on disk but strips the registration symbols — and the app then fails with `Missing operator: ...` at inference. Every backend library must be force-loaded (`-force_load` on iOS, `--whole-archive` on Android, or `executorch_target_link_options_shared_lib(...)` in ExecuTorch's own CMake helpers).

2. **A single copy of each CPU-kernel registration must exist.** Multiple backend libraries that each whole-archive-link `optimized_native_cpu_ops_lib` cause duplicate kernel-registration aborts (`error 22 EEXIST`) when both get force-loaded into the same process.

On **iOS**, each backend ships as its own static xcframework (`XnnpackBackend.xcframework`, `CoreMLBackend.xcframework`). The podspec force-loads only the ones the user opted into, and `ExecutorchLib.xcframework` itself does not whole-archive the CPU ops — so there is no duplicate registration.

On **Android**, the first iteration tried the same split (separate `libxnnpack_executorch_backend.so`, `libvulkan_executorch_backend.so`) and hit the duplicate-registration abort because the ExecuTorch Android build baked the CPU ops into each backend shared library. Rather than patch the ExecuTorch runtime, backends are now compiled into the single `libexecutorch.so` at ExecuTorch build time — matching the pre-split behavior. The `RNE_ENABLE_XNNPACK`/`RNE_ENABLE_VULKAN` CMake flags on the app side only gate optional C++ code paths; the native library linkage is whatever was baked in at ExecuTorch fork build time.

This asymmetry is the main reason Android has a single `core-android-*` tarball while iOS has separate `core-ios`, `xnnpack-ios`, and `coreml-ios` tarballs.

## Building artifacts from the ExecuTorch fork

Patched sources live in a separate repo: an `executorch/` checkout, typically sitting next to `react-native-executorch/` on a maintainer's machine. The key patch we carry:

- `extension/android/CMakeLists.txt` — add `vulkan_schema` next to `vulkan_backend` in the link libraries list so Vulkan backend builds correctly.

### iOS

From inside `packages/react-native-executorch/third-party/ios/ExecutorchLib/`:

```bash
./build.sh
```

The script drives Xcode to archive the Obj-C++ wrapper for device and simulator, then uses `xcodebuild -create-xcframework` to produce:

- `output/ExecutorchLib.xcframework` — the high-level wrapper + ExecuTorch core + baked-in CPU ops.
- `output/XnnpackBackend.xcframework` — repackaged from `third-party/ios/libs/executorch/libbackend_xnnpack_{ios,simulator}.a`.
- `output/CoreMLBackend.xcframework` — repackaged from `libbackend_coreml_{ios,simulator}.a`.

Producing the underlying `.a` files (executorch + backend static libs for both slices) is a separate step inside the ExecuTorch fork, outside the scope of this script — run the fork's iOS build instructions with XNNPACK and Core ML enabled, then drop the resulting `.a` files into `third-party/ios/libs/executorch/` before invoking `build.sh`.

CocoaPods constraint: inside an xcframework, the library file name must be identical across slices, which is why `build.sh` copies each slice into a temp directory and renames before calling `-create-xcframework`. Do not skip this step.

### Android

Build the ExecuTorch fork with the Android JNI target, enabling the backends you want baked in:

```bash
# from the executorch fork
cmake -S . -B cmake-out-android-arm64 \
  -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake \
  -DANDROID_ABI=arm64-v8a \
  -DEXECUTORCH_BUILD_XNNPACK=ON \
  -DEXECUTORCH_BUILD_VULKAN=ON \
  -DEXECUTORCH_BUILD_KERNELS_OPTIMIZED=ON \
  -DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
  -DEXECUTORCH_BUILD_EXTENSION_TENSOR=ON
  # ...plus the full set of flags the fork expects; see the fork's build script
cmake --build cmake-out-android-arm64 -j
```

The resulting `libexecutorch_jni.so` (renamed to `libexecutorch.so` in our layout) goes to `third-party/android/libs/executorch/arm64-v8a/`. Repeat for `x86_64`. The headers copied into `third-party/include/` must match the fork commit that produced the binary — a mismatch shows up as runtime `dlopen`/symbol errors.

### Packaging for a release

For each `<artifact>` tarball:

```bash
tar -czf <artifact>.tar.gz -C <staging-dir> .
sha256sum <artifact>.tar.gz > <artifact>.tar.gz.sha256 # or shasum -a 256
```

Staging-dir layout must mirror the destination (`download-libs.js` extracts with `tar -xzf` into `third-party/android/libs/` or `third-party/ios/` without any path stripping). So `core-android-arm64-v8a.tar.gz` contains a top-level `executorch/arm64-v8a/libexecutorch.so`, `cpuinfo/arm64-v8a/libcpuinfo.a`, etc.
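The packaging and extraction steps can be sanity-checked end-to-end before uploading. The sketch below stages a stub `core-android-arm64-v8a` layout, packages and verifies it exactly as above, then extracts it the way `download-libs.js` does (`shasum -a 256` replaces `sha256sum` on macOS; the stub `.so` stands in for the real binary):

```shell
# Round-trip a staged artifact: package, verify, extract, check the layout.
set -eu
tmp=$(mktemp -d); cd "$tmp"
mkdir -p staging/executorch/arm64-v8a
printf 'stub\n' > staging/executorch/arm64-v8a/libexecutorch.so
# Package from inside the staging dir so paths mirror the destination.
tar -czf core-android-arm64-v8a.tar.gz -C staging .
sha256sum core-android-arm64-v8a.tar.gz > core-android-arm64-v8a.tar.gz.sha256
sha256sum -c core-android-arm64-v8a.tar.gz.sha256
# Extract with no path stripping, like the postinstall script.
mkdir -p third-party/android/libs
tar -xzf core-android-arm64-v8a.tar.gz -C third-party/android/libs
ls third-party/android/libs/executorch/arm64-v8a/
```

If the final `ls` does not show `libexecutorch.so`, the staging layout is wrong and the real artifact would extract into the wrong place.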

Upload every `<artifact>.tar.gz` **and** its `<artifact>.tar.gz.sha256` as release assets under the `v<version>` tag on GitHub. Publishing the release (out of draft) makes them fetchable anonymously; until then, consumers need a `GITHUB_TOKEN` with read access to the repository.

### Iterating locally

Drop built artifacts (plus `.sha256` files) into `packages/react-native-executorch/dist-artifacts/`, then run a static server and point the script at it:

```bash
cd packages/react-native-executorch/dist-artifacts
python3 -m http.server 8080 &
RNET_BASE_URL=http://localhost:8080 yarn install
```

This skips GitHub entirely and re-extracts from the local tarballs — the same checksum verification still runs, so stale caches still get rejected.
6 changes: 6 additions & 0 deletions packages/react-native-executorch/android/CMakeLists.txt
@@ -24,6 +24,12 @@ set(LIBS_DIR "${CMAKE_SOURCE_DIR}/../third-party/android/libs")
set(TOKENIZERS_DIR "${CMAKE_SOURCE_DIR}/../third-party/include/executorch/extension/llm/tokenizers/include")
set(INCLUDE_DIR "${CMAKE_SOURCE_DIR}/../third-party/include")

# Optional feature flags — driven by user config in package.json, passed via gradle cmake arguments
option(RNE_ENABLE_OPENCV "Enable OpenCV-dependent computer vision features" ON)
option(RNE_ENABLE_PHONEMIZER "Enable Phonemizer-dependent TTS features" ON)
option(RNE_ENABLE_XNNPACK "Enable XNNPACK backend code paths" ON)
option(RNE_ENABLE_VULKAN "Enable Vulkan backend code paths" ON)

# Treat third-party headers as system headers to suppress deprecation warnings
include_directories(SYSTEM "${INCLUDE_DIR}")

23 changes: 22 additions & 1 deletion packages/react-native-executorch/android/build.gradle
@@ -1,5 +1,22 @@
import org.apache.tools.ant.taskdefs.condition.Os

// Read the generated build config written by the postinstall script.
// Falls back to enabling everything if the file doesn't exist (e.g. during CI
// when libs are pre-cached and the postinstall script skipped writing config).
def getRneBuildConfig() {
def configFile = new File("${project.projectDir}/../rne-build-config.json")
if (configFile.exists()) {
try {
return new groovy.json.JsonSlurper().parse(configFile)
} catch (e) {
logger.warn("[RnExecutorch] Failed to parse rne-build-config.json: ${e.message}. Defaulting to all features enabled.")
}
}
return [enableOpencv: true, enablePhonemizer: true, enableXnnpack: true, enableCoreml: true, enableVulkan: true]
}

def rneBuildConfig = getRneBuildConfig()

buildscript {
ext {
agp_version = '8.4.2'
@@ -122,7 +139,11 @@ android {
"-DREACT_NATIVE_DIR=${toPlatformFileString(reactNativeRootDir.path)}",
"-DBUILD_DIR=${project.buildDir}",
"-DANDROID_TOOLCHAIN=clang",
"-DANDROID_SUPPORT_FLEXIBLE_PAGE_SIZES=ON"
"-DANDROID_SUPPORT_FLEXIBLE_PAGE_SIZES=ON",
"-DRNE_ENABLE_OPENCV=${rneBuildConfig.enableOpencv ? 'ON' : 'OFF'}",
"-DRNE_ENABLE_PHONEMIZER=${rneBuildConfig.enablePhonemizer ? 'ON' : 'OFF'}",
"-DRNE_ENABLE_XNNPACK=${rneBuildConfig.enableXnnpack ? 'ON' : 'OFF'}",
"-DRNE_ENABLE_VULKAN=${rneBuildConfig.enableVulkan ? 'ON' : 'OFF'}"
}
}
}
Binary file modified packages/react-native-executorch/android/libs/classes.jar