Recording hooks provide microphone access, audio recording, and VU meter functionality. All recording uses the global shared AudioContext from @waveform-playlist/playout.
Important: the AudioContext must be resumed on a user interaction via resumeGlobalAudioContext() before recording.
useIntegratedRecording
All-in-one recording hook that combines microphone access, recording, and track management.
Import
```tsx
import { useIntegratedRecording } from '@waveform-playlist/recording';
import type { IntegratedRecordingOptions } from '@waveform-playlist/recording';
```
Usage
```tsx
import { WaveformPlaylistProvider, usePlaylistState, usePlaylistControls } from '@waveform-playlist/browser';
import { useIntegratedRecording } from '@waveform-playlist/recording';
import { useState } from 'react';

function RecordingControls({ tracks, setTracks }) {
  const { selectedTrackId } = usePlaylistState();
  const { currentTime } = usePlaylistControls();

  const {
    // Recording state
    isRecording,
    duration,
    level,
    peakLevel,
    error,
    // Microphone state
    hasPermission,
    devices,
    selectedDevice,
    // Controls
    requestMicAccess,
    startRecording,
    stopRecording,
    pauseRecording,
    resumeRecording,
    changeDevice,
  } = useIntegratedRecording(tracks, setTracks, selectedTrackId, {
    currentTime,
    channelCount: 1, // mono
  });

  if (!hasPermission) {
    return <button onClick={requestMicAccess}>Enable Microphone</button>;
  }

  return (
    <div>
      <select value={selectedDevice || ''} onChange={(e) => changeDevice(e.target.value)}>
        {devices.map((device) => (
          <option key={device.deviceId} value={device.deviceId}>
            {device.label}
          </option>
        ))}
      </select>
      {/* VUMeter is your own meter component, driven by level/peakLevel */}
      <VUMeter level={level} peakLevel={peakLevel} />
      {!isRecording ? (
        <button onClick={startRecording}>Record</button>
      ) : (
        <>
          <button onClick={pauseRecording}>Pause</button>
          <button onClick={stopRecording}>Stop</button>
          <span>{duration.toFixed(2)}s</span>
        </>
      )}
      {error && <div>Error: {error.message}</div>}
    </div>
  );
}

function App() {
  // Track state lives above the provider so that both the provider
  // and the recording controls share the same array.
  const [tracks, setTracks] = useState([]);
  return (
    <WaveformPlaylistProvider tracks={tracks}>
      <RecordingControls tracks={tracks} setTracks={setTracks} />
    </WaveformPlaylistProvider>
  );
}
```
Parameters

- `tracks`: Current playlist tracks array.
- `setTracks` (`(tracks: ClipTrack[]) => void`, required): Function to update tracks (React state setter).
- `selectedTrackId`: ID of the currently selected track. Recording will be added to this track.
- `options` (`IntegratedRecordingOptions`): Recording configuration options.
IntegratedRecordingOptions

- `currentTime`: Current playback/cursor position in seconds. Recording will start from max(currentTime, lastClipEndTime).
- `audioConstraints`: MediaTrackConstraints for audio recording. Overrides the recording-optimized defaults:

```ts
{
  echoCancellation: false,
  noiseSuppression: false,
  autoGainControl: false,
  latency: 0
}
```

- `channelCount`: Number of channels to record (1 = mono, 2 = stereo).
- `samplesPerPixel`: Samples per pixel for peak generation.
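The max(currentTime, lastClipEndTime) rule above can be sketched as a tiny helper; `recordStartTime` is a hypothetical name for illustration, not a package export:

```typescript
// Recording begins at the cursor, but never overlaps existing clips:
// it starts at whichever of the two positions is later.
function recordStartTime(currentTime: number, lastClipEndTime: number): number {
  return Math.max(currentTime, lastClipEndTime);
}
```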
Return Value

- `isRecording`: Whether recording is in progress.
- `isPaused`: Whether recording is paused.
- `duration`: Current recording duration in seconds.
- `level`: Current RMS level (0-1). Use for VU meter visualization.
- `peakLevel`: Peak level since recording started (0-1).
- `error`: Error object if recording failed, otherwise null.
- `stream`: Active microphone MediaStream.
- `devices`: Array of available microphone devices:

```ts
interface MicrophoneDevice {
  deviceId: string;
  label: string;
  groupId: string;
}
```

- `hasPermission`: Whether microphone permission has been granted.
- `selectedDevice`: Device ID of the currently selected microphone.
- `startRecording`: Start recording. Automatically resumes the AudioContext and creates a new clip.
- `stopRecording`: Stop recording and add the recorded clip to the selected track.
- `pauseRecording`: Pause recording without stopping.
- `resumeRecording`: Resume a paused recording.
- `requestMicAccess`: Request microphone permission and enumerate devices.
- `changeDevice` (`(deviceId: string) => Promise<void>`): Switch to a different microphone device.
- `peaks`: Live peak data for waveform visualization during recording.
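The VUMeter component in the usage example is not provided by the hook. One way to drive such a meter is to map `level` to a bar width and color zone; the helper below (`levelToMeterStyle`, including its zone thresholds) is purely illustrative and not part of the package:

```typescript
// Map a 0-1 level to CSS-ready meter values for a VU bar.
// Illustrative only - not exported by @waveform-playlist/recording.
interface MeterStyle {
  widthPercent: number; // width of the level bar in percent
  color: string;        // green / yellow / red zones
}

function levelToMeterStyle(level: number): MeterStyle {
  const clamped = Math.max(0, Math.min(1, level));
  let color = 'green';
  if (clamped > 0.9) color = 'red';
  else if (clamped > 0.7) color = 'yellow';
  return { widthPercent: clamped * 100, color };
}
```

The returned values plug directly into an inline style, e.g. `width: ${widthPercent}%`.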
useRecording
Low-level recording hook using AudioWorklet for raw audio capture.
Import
```tsx
import { useRecording } from '@waveform-playlist/recording';
import type { RecordingOptions } from '@waveform-playlist/recording';
```
Usage
```tsx
import { useRecording, useMicrophoneAccess } from '@waveform-playlist/recording';

function BasicRecorder() {
  const { stream, requestAccess } = useMicrophoneAccess();
  const { isRecording, duration, peaks, audioBuffer, startRecording, stopRecording } = useRecording(
    stream,
    { channelCount: 1, samplesPerPixel: 1024 }
  );

  const handleStop = async () => {
    const buffer = await stopRecording();
    if (buffer) {
      console.log('Recorded:', buffer.duration, 'seconds');
      // Use buffer...
    }
  };

  return (
    <div>
      {!stream && <button onClick={requestAccess}>Enable Mic</button>}
      {stream && !isRecording && <button onClick={startRecording}>Record</button>}
      {isRecording && <button onClick={handleStop}>Stop ({duration.toFixed(2)}s)</button>}
    </div>
  );
}
```
Parameters

- `stream` (`MediaStream | null`, required): MediaStream from getUserMedia. Typically obtained from useMicrophoneAccess.

RecordingOptions

- `channelCount`: Number of channels to record (1 = mono, 2 = stereo).
- `samplesPerPixel`: Samples per pixel for live peak generation.
Return Value

- `isRecording`: Whether recording is active.
- `isPaused`: Whether recording is paused.
- `duration`: Current recording duration in seconds.
- `peaks`: Live peak data for waveform visualization.
- `audioBuffer`: Final AudioBuffer after recording completes; null during recording.
- `level`: Current RMS level (0-1). Updated during recording.
- `peakLevel`: Peak level since recording started (0-1).
- `startRecording`: Start recording. Loads the AudioWorklet module and begins capturing audio.
- `stopRecording` (`() => Promise<AudioBuffer | null>`): Stop recording and return the final AudioBuffer.
- `error`: Error object if recording failed.
useMicrophoneAccess
Manage microphone permissions and device enumeration.
Import
```tsx
import { useMicrophoneAccess } from '@waveform-playlist/recording';
import type { MicrophoneDevice } from '@waveform-playlist/recording';
```
Usage
```tsx
import { useMicrophoneAccess } from '@waveform-playlist/recording';

function MicrophoneSelector() {
  const { stream, devices, hasPermission, requestAccess, stopStream, error } = useMicrophoneAccess();

  if (!hasPermission) {
    return <button onClick={() => requestAccess()}>Grant Microphone Access</button>;
  }

  return (
    <div>
      <select onChange={(e) => requestAccess(e.target.value)}>
        {devices.map((device) => (
          <option key={device.deviceId} value={device.deviceId}>
            {device.label}
          </option>
        ))}
      </select>
      <button onClick={stopStream}>Stop</button>
      {error && <div>Error: {error.message}</div>}
    </div>
  );
}
```
Return Value

- `stream`: Active microphone MediaStream.
- `devices`: Array of available audio input devices:

```ts
interface MicrophoneDevice {
  deviceId: string;
  label: string;
  groupId: string;
}
```

- `hasPermission`: Whether microphone permission has been granted.
- Loading state: Whether currently requesting access or enumerating devices.
- `requestAccess` (`(deviceId?: string, audioConstraints?: MediaTrackConstraints) => Promise<void>`): Request microphone access. Optionally specify a device ID and custom constraints. Default constraints:

```ts
{
  echoCancellation: false,
  noiseSuppression: false,
  autoGainControl: false,
  latency: 0
}
```

- `stopStream`: Stop the microphone stream and revoke access.
- `error`: Error object if access failed.
useMicrophoneLevel
Monitor microphone input levels in real-time for VU meters.
Import
```tsx
import { useMicrophoneLevel } from '@waveform-playlist/recording';
import type { UseMicrophoneLevelOptions } from '@waveform-playlist/recording';
```
Usage
```tsx
import { useMicrophoneAccess, useMicrophoneLevel } from '@waveform-playlist/recording';

function VUMeter() {
  const { stream } = useMicrophoneAccess();
  const { level, peakLevel, resetPeak } = useMicrophoneLevel(stream, {
    updateRate: 60, // 60fps
    smoothingTimeConstant: 0.8,
  });

  return (
    <div>
      <div style={{ width: `${level * 100}%`, height: '20px', background: 'green' }} />
      <div style={{ left: `${peakLevel * 100}%`, position: 'absolute' }}>Peak</div>
      <button onClick={resetPeak}>Reset Peak</button>
    </div>
  );
}
```
Parameters

- `stream` (`MediaStream | null`, required): MediaStream from getUserMedia.
- `options` (`UseMicrophoneLevelOptions`): Level monitoring configuration.
UseMicrophoneLevelOptions

- `updateRate`: How often to update the level, in Hz (frames per second).
- `fftSize`: FFT size for the analyser.
- `smoothingTimeConstant`: Smoothing time constant (0-1). Higher values give a smoother but slower response.
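Conceptually, the smoothing time constant applies exponential time-averaging, as the Web Audio AnalyserNode does: each new reading is blended with the previous one. A minimal sketch (`smooth` is illustrative, not the library's actual implementation):

```typescript
// Exponential smoothing: higher k = smoother but slower to react.
// smoothed[n] = smoothed[n-1] * k + input[n] * (1 - k)
function smooth(previous: number, current: number, k: number): number {
  return previous * k + current * (1 - k);
}

// With k = 0.8, a sudden jump from 0 to 1 only reaches 0.2 on the
// first update, then approaches 1 over subsequent updates.
```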
Return Value

- `level`: Current audio level (0-1). 0 = silence, 1 = maximum level.
- `peakLevel`: Peak level since last reset (0-1).
- `resetPeak`: Reset the peak level to 0.
Recording Architecture
Global AudioContext
All recording uses the shared Tone.js AudioContext from @waveform-playlist/playout:
```ts
import { getContext } from 'tone';

const context = getContext(); // Same context as playlist
```
Critical: call resumeGlobalAudioContext() on a user interaction before recording. Modern browsers suspend the AudioContext until a user gesture occurs.
Each recording hook creates its own MediaStreamSource to avoid cross-context errors in Firefox:

```ts
import { getContext, Meter, connect } from 'tone';

// CORRECT - Create the source from the same context as the other nodes
const context = getContext();
const source = context.createMediaStreamSource(stream);
const meter = new Meter({ context });
connect(source, meter);
```
Creating multiple sources from the same MediaStream is valid; each source reads from the stream independently.
AudioWorklet Recording
useRecording uses AudioWorklet for low-latency capture:

- Loads recording-processor.worklet.js on the first recording
- The AudioWorklet runs in a separate thread (no main-thread blocking)
- Audio chunks are sent to the main thread via postMessage
- Chunks are accumulated on the main thread
- The final AudioBuffer is created on stop
Debugging Note: console.log() in AudioWorklet does NOT appear in browser console. Use postMessage() to send debug data to main thread.
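The accumulation step above boils down to concatenating the Float32Array chunks received from the worklet into one contiguous buffer per channel. A simplified sketch of that merge (`mergeChunks` is illustrative, not the package's actual code):

```typescript
// Merge Float32Array chunks posted by the AudioWorklet into a
// single contiguous buffer for one channel.
function mergeChunks(chunks: Float32Array[]): Float32Array {
  const total = chunks.reduce((sum, chunk) => sum + chunk.length, 0);
  const merged = new Float32Array(total);
  let offset = 0;
  for (const chunk of chunks) {
    merged.set(chunk, offset); // copy chunk at the running offset
    offset += chunk.length;
  }
  return merged;
}
```

On stop, one merged array per channel would then be copied into the final AudioBuffer (e.g. via copyToChannel).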
VU Meter Normalization
Tone.js Meter returns dB values. Convert to 0-1 range:
```ts
const db = meter.getValue();
const level = Math.max(0, Math.min(1, (db + 100) / 100));
```
A -100 dB floor is used for Firefox compatibility, since Firefox reports lower dB values than Chrome.
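The same conversion as a small checked helper (`dbToLevel` is a hypothetical name, not a package export):

```typescript
// Convert a Tone.js Meter dB reading to a 0-1 VU level,
// clamping against a -100 dB floor (Firefox-friendly).
function dbToLevel(db: number): number {
  return Math.max(0, Math.min(1, (db + 100) / 100));
}
```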
Recording-Optimized Constraints
Default microphone constraints prioritize raw audio quality:
```ts
{
  echoCancellation: false, // Disable echo cancellation
  noiseSuppression: false, // Disable noise suppression
  autoGainControl: false,  // Disable auto-gain
  latency: 0               // Low latency mode
}
```
Override these via the audioConstraints parameter for voice recording or video calls.
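For example, a voice-chat override might re-enable the browser processing that the raw-recording defaults deliberately turn off. The values here are illustrative, and the local `AudioConstraints` type simply mirrors the MediaTrackConstraints fields used:

```typescript
// Shape matching the MediaTrackConstraints fields used below.
type AudioConstraints = {
  echoCancellation: boolean;
  noiseSuppression: boolean;
  autoGainControl: boolean;
  latency?: number;
};

// Recording-optimized defaults: keep the raw signal untouched.
const recordingDefaults: AudioConstraints = {
  echoCancellation: false,
  noiseSuppression: false,
  autoGainControl: false,
  latency: 0,
};

// Voice/video-call override: re-enable the browser's processing.
const voiceConstraints: AudioConstraints = {
  ...recordingDefaults,
  echoCancellation: true,
  noiseSuppression: true,
  autoGainControl: true,
};

// Would then be passed as: requestAccess(deviceId, voiceConstraints)
```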