This example demonstrates LFM2.5-Audio-1.5B running entirely in your browser using WebGPU and ONNX Runtime Web. All of the code lives in this Hugging Face Space, which also hosts a deployed version you can interact with, no setup required.

Quick start

1. Clone the repository

   git clone https://huggingface.co/spaces/LiquidAI/LFM2.5-Audio-1.5B-transformers-js/
   cd LFM2.5-Audio-1.5B-transformers-js

2. Verify npm is installed

   npm --version

   If you don't have npm, install Node.js and npm first.

3. Install dependencies

   npm install

4. Start the development server

   npm run dev
The dev server prints a local URL (typically http://localhost:5173) where you can open the app in your browser.

Features

  • ASR (Speech Recognition): Transcribe audio to text
  • TTS (Text-to-Speech): Convert text to natural speech
  • Interleaved: Mixed audio and text conversation
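Under the hood, modes like ASR are driven through Transformers.js. A minimal sketch along those lines is shown below; the `pipeline` task name and `device` option follow the standard Transformers.js v3 API, but the function name is hypothetical and whether this exact model id works with the generic pipeline helper is an assumption — the Space's own code may use a dedicated model class instead.

```javascript
// Sketch: browser-side speech recognition with Transformers.js.
// The model id and use of the generic pipeline helper are assumptions;
// see the Space's source for the actual loading code.
async function transcribe(audioUrl) {
  const { pipeline } = await import("@huggingface/transformers");
  const asr = await pipeline(
    "automatic-speech-recognition",
    "LiquidAI/LFM2.5-Audio-1.5B-ONNX",
    { device: "webgpu" } // run inference on the GPU via WebGPU
  );
  const { text } = await asr(audioUrl);
  return text;
}
```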

Requirements

  • A browser with WebGPU support (Chrome/Edge 113+)
  • Enable WebGPU at chrome://flags/#enable-unsafe-webgpu if needed
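Before loading the model, the app can check whether WebGPU is available at all. WebGPU-capable browsers expose a `navigator.gpu` entry point; this small helper (a hypothetical name, not part of the Space's code) takes the navigator-like object as a parameter so it can also be exercised outside a browser:

```javascript
// Returns true when the given navigator-like object exposes the
// WebGPU entry point (navigator.gpu in WebGPU-capable browsers).
function hasWebGPU(nav) {
  return nav != null && "gpu" in nav && !!nav.gpu;
}

// In the browser: hasWebGPU(navigator)
```

Note that `navigator.gpu` being present only means the API exists; actually acquiring an adapter (`navigator.gpu.requestAdapter()`) can still fail on unsupported hardware.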

Model

Uses quantized ONNX models from LiquidAI/LFM2.5-Audio-1.5B-ONNX.
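Transformers.js selects which ONNX weight variant to download via a `dtype` option passed at load time. The options object below is illustrative: `device` and `dtype` are real Transformers.js v3 option names, but which quantized variants the LiquidAI/LFM2.5-Audio-1.5B-ONNX repository actually ships is an assumption.

```javascript
// Illustrative loading options for a quantized ONNX model in
// Transformers.js. "q4" requests a 4-bit quantized weight file,
// assuming the model repository provides one.
const loadOptions = {
  device: "webgpu", // run inference through WebGPU
  dtype: "q4",      // quantization variant (assumed available)
};

// e.g. pipeline("automatic-speech-recognition", modelId, loadOptions)
```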

License

Model weights are released under the LFM 1.0 License.

Source code

View the complete source code on Hugging Face.