Overview
MicTranscriber is a helper class that automatically captures audio from the system microphone and feeds it to a transcriber. It handles all the platform-specific audio capture code for iOS and macOS.
Initialization
init(modelPath:modelArch:updateInterval:sampleRate:channels:bufferSize:options:)
Initialize a MicTranscriber.
public init(
    modelPath: String,
    modelArch: ModelArch = .tiny,
    updateInterval: TimeInterval = 0.5,
    sampleRate: Double = 16000,
    channels: Int = 1,
    bufferSize: AVAudioFrameCount = 1024,
    options: [TranscriberOption]? = nil
) throws
Parameters:
- modelPath (String): Path to the directory containing model files
- modelArch (ModelArch, default: .tiny): Model architecture to use. Options: .tiny, .base, .tinyStreaming, .baseStreaming, .smallStreaming, .mediumStreaming
- updateInterval (TimeInterval, default: 0.5): Interval in seconds between automatic transcription updates
- sampleRate (Double, default: 16000): Audio sample rate in Hz
- channels (Int, default: 1): Number of audio channels (currently only mono is supported for transcription)
- bufferSize (AVAudioFrameCount, default: 1024): Buffer size in frames for audio capture
- options ([TranscriberOption]?, default: nil): Optional transcriber options for advanced configuration
Throws: MoonshineError if the transcriber cannot be loaded
Example:
import MoonshineVoice

do {
    let micTranscriber = try MicTranscriber(
        modelPath: "/path/to/models",
        modelArch: .tiny,
        updateInterval: 0.5
    )
} catch {
    print("Failed to initialize MicTranscriber: \(error)")
}
Audio Capture Control
start()
Start listening to the microphone and begin transcription.
public func start() throws
Throws: MoonshineError or AVAudioSessionError if starting fails
Note: This method will request microphone permission if it hasn’t been granted yet. On iOS/tvOS/watchOS, it uses AVAudioSession. On macOS, it uses AVCaptureDevice.
Example:
do {
    try micTranscriber.start()
    print("Listening to microphone...")
} catch {
    print("Failed to start: \(error)")
}
stop()
Stop listening to the microphone and stop transcription.
public func stop() throws
Throws: MoonshineError if stopping fails
Example:
do {
    try micTranscriber.stop()
    print("Stopped listening")
} catch {
    print("Failed to stop: \(error)")
}
close()
Close the transcriber and free resources.
Note: This method automatically calls stop() if the transcriber is currently listening.
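Because close() stops the microphone automatically, a teardown path only needs the single call. A minimal cleanup sketch (the TranscriptionController wrapper class is hypothetical, introduced here only for illustration):

```swift
import MoonshineVoice

final class TranscriptionController {
    let micTranscriber: MicTranscriber

    init() throws {
        // Hypothetical model path for illustration
        micTranscriber = try MicTranscriber(modelPath: "/path/to/models")
    }

    deinit {
        // close() calls stop() first if the transcriber is still
        // listening, so no explicit stop() is required here.
        micTranscriber.close()
    }
}
```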
Event Listeners
addListener(_:)
Add an event listener to receive transcription events. Supports both closure-based and protocol-based listeners.
public func addListener(_ listener: @escaping (TranscriptEvent) throws -> Void)
public func addListener(_ listener: TranscriptEventListener)
Example with closure:
micTranscriber.addListener { event in
    if let lineCompleted = event as? LineCompleted {
        print("\(lineCompleted.line.text)")
    }
}
Example with protocol:
class MyListener: TranscriptEventListener {
    func onLineTextChanged(_ event: LineTextChanged) {
        print("\(event.line.text)")
    }

    func onLineCompleted(_ event: LineCompleted) {
        print("Final: \(event.line.text)")
    }
}

let listener = MyListener()
micTranscriber.addListener(listener)
removeListener(_:)
Remove an event listener.
public func removeListener(_ listener: @escaping (TranscriptEvent) throws -> Void)
public func removeListener(_ listener: TranscriptEventListener)
removeAllListeners()
Remove all event listeners.
public func removeAllListeners()
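A short sketch of listener lifecycle management using the protocol-based overload, which removes a specific listener by identity before clearing the rest (MyListener is the class from the protocol example above):

```swift
let listener = MyListener()
micTranscriber.addListener(listener)

// ... later, detach just this listener ...
micTranscriber.removeListener(listener)

// Or drop every registered listener at once,
// e.g. before reusing the transcriber elsewhere:
micTranscriber.removeAllListeners()
```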
Complete Example
import MoonshineVoice

class TranscriptionDelegate: TranscriptEventListener {
    func onLineStarted(_ event: LineStarted) {
        print("[Started] \(event.line.text)")
    }

    func onLineTextChanged(_ event: LineTextChanged) {
        // Update UI with interim results
        print("[Interim] \(event.line.text)")
    }

    func onLineCompleted(_ event: LineCompleted) {
        // Display final result
        print("[Final] \(event.line.text)")
    }

    func onError(_ event: TranscriptError) {
        print("[Error] \(event.error)")
    }
}

do {
    // Initialize MicTranscriber
    let micTranscriber = try MicTranscriber(
        modelPath: Bundle.main.path(forResource: "models", ofType: nil)!,
        modelArch: .tiny,
        updateInterval: 0.5
    )

    // Add event listener
    let delegate = TranscriptionDelegate()
    micTranscriber.addListener(delegate)

    // Start listening
    try micTranscriber.start()

    // ... let it run for a while ...

    // Stop listening
    try micTranscriber.stop()

    // Clean up
    micTranscriber.close()
} catch {
    print("Error: \(error)")
}
iOS/tvOS/watchOS
- Uses AVAudioSession for microphone access
- Requires microphone permission in Info.plist: NSMicrophoneUsageDescription
- Audio session is automatically configured with the .record category
- Audio session is deactivated when stop() is called
macOS
- Uses AVCaptureDevice for permission checking
- Requires microphone permission in Info.plist: NSMicrophoneUsageDescription
- Uses AVAudioEngine for audio capture
Microphone Permissions
Before using MicTranscriber, ensure you’ve added the appropriate permission description to your Info.plist:
<key>NSMicrophoneUsageDescription</key>
<string>This app needs microphone access for speech transcription</string>
The start() method will automatically request permission if it hasn’t been granted. If permission is denied, it will throw a MoonshineError.
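If you prefer to request permission yourself before calling start() (for example, to show custom guidance when the user declines), you can use the standard Apple APIs directly. This sketch is not part of MoonshineVoice; AVCaptureDevice.requestAccess(for:) is available on both iOS and macOS:

```swift
import AVFoundation

AVCaptureDevice.requestAccess(for: .audio) { granted in
    DispatchQueue.main.async {
        if granted {
            // Safe to call micTranscriber.start() here.
        } else {
            // Permission denied: direct the user to Settings
            // instead of letting start() throw.
            print("Microphone permission denied")
        }
    }
}
```

Requesting access up front means start() never has to surface a permission prompt mid-session, which keeps the error path predictable.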