

This module provides core functionality for testing library initialization and detecting hardware acceleration support on different platforms.

testSherpaInit()

Test method to verify that the sherpa-onnx native library is loaded correctly.
function testSherpaInit(): Promise<string>

Returns

Promise<string>
Resolves with a test message confirming the library is loaded.

Example

import { testSherpaInit } from 'react-native-sherpa-onnx';

const result = await testSherpaInit();
console.log(result); // "Sherpa ONNX initialized successfully"

getQnnSupport()

Check QNN (Qualcomm Neural Network) acceleration support on Android devices.
function getQnnSupport(modelBase64?: string): Promise<AccelerationSupport>

Parameters

modelBase64
string
Optional base64-encoded model for session initialization test. If omitted, uses an embedded test model.

Returns

Promise<AccelerationSupport>
Resolves with an AccelerationSupport object; see Types below.

Platform Support

  • Android: Full support on Qualcomm devices
  • iOS: Not supported; all AccelerationSupport fields resolve to false

Example

import { getQnnSupport } from 'react-native-sherpa-onnx';

const qnnSupport = await getQnnSupport();
if (qnnSupport.canInit) {
  console.log('QNN acceleration available');
  // Use provider: 'qnn' in model options
}

getNnapiSupport()

Check NNAPI (Android Neural Networks API) acceleration support on Android devices.
function getNnapiSupport(modelBase64?: string): Promise<AccelerationSupport>

Parameters

modelBase64
string
Optional base64-encoded model for session initialization test.

Returns

Promise<AccelerationSupport>
Resolves with an AccelerationSupport object; see Types below.

Platform Support

  • Android: Support varies by device and Android version
  • iOS: Not supported; all AccelerationSupport fields resolve to false

Example

import { getNnapiSupport } from 'react-native-sherpa-onnx';

const nnapiSupport = await getNnapiSupport();
if (nnapiSupport.canInit) {
  // Use provider: 'nnapi' in model options
}

getXnnpackSupport()

Check XNNPACK (CPU-optimized) acceleration support.
function getXnnpackSupport(modelBase64?: string): Promise<AccelerationSupport>

Parameters

modelBase64
string
Optional base64-encoded model for session initialization test.

Returns

Promise<AccelerationSupport>
Resolves with an AccelerationSupport object; see Types below.

Platform Support

  • Android: Full support (CPU-optimized inference)
  • iOS: Not supported; all AccelerationSupport fields resolve to false

Example

import { getXnnpackSupport } from 'react-native-sherpa-onnx';

const xnnpackSupport = await getXnnpackSupport();
if (xnnpackSupport.canInit) {
  // Use provider: 'xnnpack' in model options
}

getCoreMlSupport()

Check Core ML acceleration support on iOS devices with Apple Neural Engine.
function getCoreMlSupport(modelBase64?: string): Promise<AccelerationSupport>

Parameters

modelBase64
string
Optional base64-encoded model for session initialization test.

Returns

Promise<AccelerationSupport>
Resolves with an AccelerationSupport object; see Types below.

Platform Support

  • iOS: Full support on iOS 11+ with Apple Neural Engine
  • Android: Not supported; all AccelerationSupport fields resolve to false

Example

import { getCoreMlSupport } from 'react-native-sherpa-onnx';

const coreMLSupport = await getCoreMlSupport();
if (coreMLSupport.hasAccelerator) {
  console.log('Apple Neural Engine available');
  // Use provider: 'coreml' in model options
}

getAvailableProviders()

Get the list of available ONNX Runtime execution providers on the current device.
function getAvailableProviders(): Promise<string[]>

Returns

Promise<string[]>
Resolves with an array of provider names (e.g., ["CPU", "NNAPI", "QNN", "XNNPACK"]).

Platform Support

Requires the ONNX Runtime Java bridge from the onnxruntime AAR.

Example

import { getAvailableProviders } from 'react-native-sherpa-onnx';

const providers = await getAvailableProviders();
console.log('Available providers:', providers);
// ["CPU", "XNNPACK", "QNN"] on Qualcomm Android
// ["CPU", "CoreML"] on iOS with Neural Engine
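
In practice, the per-provider checks above can be probed in priority order until one reports canInit. A minimal sketch of that pattern, assuming a pickProvider helper of our own (not part of the library); the SupportResult shape mirrors AccelerationSupport, and in a real app the check functions would be getQnnSupport, getNnapiSupport, and so on:

```typescript
// Shape returned by each support check (mirrors the library's
// AccelerationSupport type).
interface SupportResult {
  providerCompiled: boolean;
  hasAccelerator: boolean;
  canInit: boolean;
}

type SupportCheck = () => Promise<SupportResult>;

// Illustrative helper (not part of the library): probe providers in
// priority order and return the first whose session check succeeds.
async function pickProvider(
  checks: Array<[string, SupportCheck]>,
): Promise<string> {
  for (const [name, check] of checks) {
    const support = await check();
    if (support.canInit) return name;
  }
  return 'cpu'; // CPU inference is always available as a fallback
}

// On Android you might probe QNN, then NNAPI, then XNNPACK:
// const provider = await pickProvider([
//   ['qnn', getQnnSupport],
//   ['nnapi', getNnapiSupport],
//   ['xnnpack', getXnnpackSupport],
// ]);
```

The priority order here is an application choice, not a library requirement: QNN is typically fastest on Qualcomm hardware, with NNAPI and XNNPACK as progressively more portable fallbacks.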

Types

AccelerationSupport

Result type for hardware acceleration queries.
interface AccelerationSupport {
  /** Whether the provider is compiled into the library */
  providerCompiled: boolean;
  
  /** Whether compatible hardware accelerator is available */
  hasAccelerator: boolean;
  
  /** Whether a model session can be initialized with this provider */
  canInit: boolean;
}
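
The three flags answer progressively stronger questions: providerCompiled (is the provider in this build?), hasAccelerator (is compatible hardware present?), and canInit (did a test session actually initialize?). A minimal sketch of turning a result into a diagnostic message; the describeSupport helper is illustrative, not part of the library:

```typescript
// Local copy of the library's AccelerationSupport shape, so the
// example is self-contained.
interface AccelerationSupport {
  providerCompiled: boolean;
  hasAccelerator: boolean;
  canInit: boolean;
}

// Illustrative helper: report the first failing check, since each flag
// only matters once the previous one passes.
function describeSupport(name: string, s: AccelerationSupport): string {
  if (!s.providerCompiled) return `${name}: not compiled into this build`;
  if (!s.hasAccelerator) return `${name}: compiled, but no compatible hardware found`;
  if (!s.canInit) return `${name}: hardware present, but session initialization failed`;
  return `${name}: ready to use`;
}
```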
