## Architecture
Llama.cpp is used for inference with both models, with a custom runner for the audio model. The car cockpit (UI) is vanilla JS+HTML+CSS, and it communicates with the backend through messages over a websocket, like a greatly simplified car CAN bus.
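The wire format of those websocket messages is not specified here; as a minimal sketch, assuming newline-free JSON frames with hypothetical `type`/`value` fields (not the repo's actual schema), the cockpit side could encode, decode, and apply a signal like this:

```javascript
// Hypothetical codec for the UI <-> backend websocket link.
// Field names ("type", "value") are illustrative assumptions.
function encodeSignal(type, value) {
  return JSON.stringify({ type, value });
}

function decodeSignal(raw) {
  const msg = JSON.parse(raw);
  if (typeof msg.type !== "string") throw new Error("missing signal type");
  return msg;
}

// Apply one decoded signal to a plain cockpit state object,
// e.g. updating a "speed" widget value.
function applySignal(state, raw) {
  const { type, value } = decodeSignal(raw);
  return { ...state, [type]: value };
}
```

Keeping the state transition a pure function (old state + frame in, new state out) makes the UI easy to test without opening a real socket.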
## Quick start

### Supported platforms

The following platforms are currently supported:
- macos-arm64
- ubuntu-arm64
- ubuntu-x64
- ubuntu-WSL2
### Building llama-server from source

The `make -j2 audioserver serve` step will build llama-server automatically if it is not already present. This requires cmake and a C++ toolchain. If the build fails, install the missing dependencies first:

| Platform | Command |
|---|---|
| macOS | `brew install cmake` (Xcode CLT required: `xcode-select --install`) |
| Linux / WSL2 | `make install-deps` |

When building for ROCm, also install:

```sh
sudo apt install -y libstdc++-14-dev
```

Then re-run `make -j2 audioserver serve`.

### Optional: Symlink llama-server

If llama-server is already in your PATH, symlink it instead of building.