# Documentation Index
Fetch the complete documentation index at: https://mintlify.com/xdcobra/react-native-sherpa-onnx/llms.txt
Use this file to discover all available pages before exploring further.
## Overview

The Android setup for React Native Sherpa-ONNX is handled automatically via Gradle. The library manages native dependencies, downloads prebuilt binaries, and configures execution providers without requiring manual intervention.

## Requirements
- Android API 24+ (Android 7.0+)
- Gradle 8.7.2+
- Kotlin 2.0.21+
- NDK (automatically configured)
- CMake 3.22.1+
## Installation

No additional setup is required beyond the standard npm installation. The Gradle build automatically handles:

- Native dependency resolution via Gradle
- Prebuilt binary downloads from Maven or GitHub releases
- JNI library configuration
- CMake native build setup
## Gradle Configuration

### SDK Versions

The library uses the following default SDK versions (configurable via root project properties).

### Supported ABIs

All standard Android ABIs are supported:

- `arm64-v8a` (primary, 64-bit ARM)
- `armeabi-v7a` (32-bit ARM)
- `x86` (32-bit x86 emulators)
- `x86_64` (64-bit x86 emulators)
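Because all four ABIs ship by default, apps that only target 64-bit devices can trim APK size with the standard Android `abiFilters` mechanism. This is app-level Gradle configuration, not something specific to this library; a minimal sketch:

```groovy
// android/app/build.gradle — standard Android Gradle DSL, not library-specific
android {
    defaultConfig {
        ndk {
            // Package only 64-bit ABIs; remove this block to keep all four
            abiFilters "arm64-v8a", "x86_64"
        }
    }
}
```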
### Native Build

The Android module uses CMake for native code compilation with:

- C++17 standard
- c++_shared STL
- JNI bridge for sherpa-onnx C++ API
## Prebuilt Dependencies

The library automatically downloads and integrates prebuilt native libraries:

| Component | Default Version | Purpose |
|---|---|---|
| sherpa-onnx | 1.12.24 | Core ONNX Runtime and speech processing |
| onnxruntime | 1.24.2-qnn2.43.1.260218 | ONNX Runtime with QNN support |
| FFmpeg | 8.0.1 | Audio format conversion |
| libarchive | 3.8.5 | Archive extraction (.tar.bz2, etc.) |
Default versions are pinned in `third_party/*/ANDROID_RELEASE_TAG` files and can be overridden via environment variables.
## Execution Providers

Android supports multiple execution providers for hardware acceleration.

### CPU (Default)
Always available. No configuration required.

### NNAPI (Android Neural Networks API)

Hardware acceleration via GPU/DSP/NPU. Uses the Android Neural Networks API for device-specific acceleration. The NNAPI support check reports three fields:

- `providerCompiled`: Whether NNAPI is built into ONNX Runtime
- `hasAccelerator`: Whether the device reports a dedicated accelerator (GPU/DSP/NPU)
- `canInit`: Whether an ONNX session can be created with NNAPI

`hasAccelerator` can be `false` while `canInit` is `true`; in that case NNAPI will run on the CPU.
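The three flags imply a simple decision rule. The sketch below is illustrative only: the `NnapiSupport` shape mirrors the fields described above, but it is an assumed type, not the library's actual API.

```typescript
// Interpreting the three NNAPI capability flags (assumed shape, see lead-in).
type NnapiSupport = {
  providerCompiled: boolean; // NNAPI built into ONNX Runtime
  hasAccelerator: boolean;   // device reports a dedicated GPU/DSP/NPU
  canInit: boolean;          // an ONNX session can be created with NNAPI
};

type NnapiMode = "accelerated" | "cpu-fallback" | "unavailable";

function chooseNnapiMode(s: NnapiSupport): NnapiMode {
  if (!s.providerCompiled || !s.canInit) return "unavailable";
  // canInit can be true while hasAccelerator is false: NNAPI runs on CPU.
  return s.hasAccelerator ? "accelerated" : "cpu-fallback";
}
```

When the result is `"cpu-fallback"`, XNNPACK (below) may be the better choice than NNAPI-on-CPU.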
### XNNPACK (CPU-Optimized)

Optimized CPU execution. XNNPACK provides faster CPU inference than the default CPU provider.
### QNN (Qualcomm NPU)

Qualcomm Neural Processing Unit acceleration. QNN provides the fastest inference on Qualcomm Snapdragon devices with NPU support.

### Adding QNN Runtime Libraries
1. Download the Qualcomm AI Runtime:
   - Visit: Qualcomm AI Runtime Community
   - Accept the license agreement
   - Download the QNN SDK

2. Copy the required libraries into `android/app/src/main/jniLibs/<ABI>/`:
   - `libQnnHtp.so`
   - `libQnnHtpV*Stub.so` (all versions: V68, V69, V73, V75, V79, V81)
   - `libQnnHtpV*Skel.so` (all versions)
   - `libQnnHtpPrepare.so`
   - `libQnnCpu.so`
   - `libQnnSystem.so`

3. Check QNN support:
   - `providerCompiled`: Whether QNN is built into ONNX Runtime (always `true` in this SDK)
   - `hasAccelerator`: Whether the native QNN HTP backend initializes (`QnnBackend_create` succeeds)
   - `canInit`: Whether an ONNX session can be created with the QNN execution provider
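Since CPU is always available, a common pattern is to probe the support checks in order of preference and fall back to CPU. This is a hypothetical sketch: the `{ canInit }` shape mirrors the checks described above, but the record of checks and provider names are assumptions, not the library's actual API.

```typescript
// Pick the first provider whose session check passed (assumed shapes, see lead-in).
type ProviderCheck = { canInit: boolean };

function pickProvider(
  checks: Record<string, ProviderCheck>,
  preference: string[] = ["qnn", "nnapi", "xnnpack"],
): string {
  for (const name of preference) {
    if (checks[name]?.canInit) return name;
  }
  return "cpu"; // CPU requires no configuration and is always available
}
```

The preference order here reflects the section above: QNN is fastest on supported Snapdragon devices, with XNNPACK as the CPU-optimized fallback.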
### License Compliance

The Qualcomm AI Stack License permits distribution of QNN runtime libraries only as part of your application (not standalone). When including QNN libraries:

- Do not remove Qualcomm copyright or proprietary notices
- Include the applicable Qualcomm license in your app’s legal/credits section
- Distribute libraries only in object code form, bundled with your app
See `third_party/onnxruntime_prebuilt/license/license.txt` for details.
## Checking Available Providers

Query all available execution providers at runtime.

## Optional: Disabling FFmpeg or libarchive
To reduce APK size or avoid conflicts with other libraries, you can disable FFmpeg or libarchive.

### Disable FFmpeg
Add the corresponding property to `android/gradle.properties`.
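The exact property name is not given on this page. As a purely hypothetical illustration of the pattern (the flag name below is invented, check the library's README or Gradle scripts for the real one):

```properties
# android/gradle.properties — hypothetical flag name, for illustration only
sherpaOnnx.enableFFmpeg=false
```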
### Disable libarchive

Add the corresponding property to `android/gradle.properties`.
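Again, as a hypothetical illustration only (the flag name below is invented, check the library's README or Gradle scripts for the real one):

```properties
# android/gradle.properties — hypothetical flag name, for illustration only
sherpaOnnx.enableLibarchive=false
```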
## ProGuard / R8

The library includes ProGuard rules (`proguard-rules.pro`) to preserve JNI-called classes and methods. No additional ProGuard configuration is required.
## Troubleshooting

### Native library not found

If you see errors about missing `.so` files:
1. Clean and rebuild: run `cd android && ./gradlew clean`, then rebuild the app.

2. Check the jniLibs directory: ensure `android/src/main/jniLibs/<ABI>/` contains:
   - `libsherpa-onnx-jni.so`
   - `libonnxruntime.so`
   - (Optional) QNN libraries if using the QNN provider
### CMake configuration failed

Ensure you have CMake 3.22.1+ installed via the Android SDK Manager (for example, `sdkmanager --install "cmake;3.22.1"`).

### QNN not working
If `getQnnSupport().canInit` returns `false`:

- Verify the QNN runtime libraries are in `jniLibs/<ABI>/`
- Ensure your device has a Qualcomm Snapdragon SoC with NPU support
- Check logcat for QNN initialization errors (for example, `adb logcat | grep -i Qnn`)
## Related Documentation
- Play Asset Delivery (PAD) - For large model distribution
- Execution Providers - Detailed provider documentation
- Model Setup - Asset bundling and model paths