Pre-recorded Transcription
Available Now
Transcribe Audio Files
Upload audio files and receive accurate transcripts, timestamps, speaker labels, and subtitle files. Optimized for African languages and production workloads.
What you'll get
Automatic language detection
Our service automatically detects the spoken language(s) in your audio, so you don't need to set the locale manually for most files.
- Detects mixed-language audio segments
- Confidence scores per segment
- Option to override detection if needed
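As a sketch of how per-segment detection results might be consumed, the snippet below falls back to a manual override when confidence is low. The segment fields `language` and `confidence` are illustrative, not the service's actual response schema:

```python
# Hypothetical per-segment detection results; field names are illustrative,
# not the service's actual response schema.
segments = [
    {"text": "Habari ya asubuhi", "language": "sw", "confidence": 0.97},
    {"text": "let's begin", "language": "en", "confidence": 0.91},
    {"text": "mm-hmm", "language": "en", "confidence": 0.42},
]

def resolve_language(segment, fallback="en", threshold=0.6):
    """Trust the detected language only above a confidence threshold."""
    if segment["confidence"] >= threshold:
        return segment["language"]
    return fallback

print([resolve_language(s) for s in segments])  # ['sw', 'en', 'en']
```

The threshold value is a tuning choice; lower it for cleaner audio, raise it when mixed-language segments are common.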
Speaker diarization
Identify and label different speakers automatically for clearer transcripts and downstream analytics.
- Per-word speaker tags when available
- Segment-level speaker timestamps
- Post-processing to merge short speaker bursts
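To illustrate the burst-merging step above, here is a minimal self-contained sketch (the tuple layout and the 0.5 s threshold are assumptions, not the service's internals) that folds very short segments into the preceding speaker's turn:

```python
# Illustrative diarized segments: (speaker, start_sec, end_sec, text).
segments = [
    ("A", 0.0, 4.2, "Welcome to the show."),
    ("B", 4.2, 4.5, "Uh."),  # short burst
    ("A", 4.5, 9.0, "Today we discuss transcription."),
]

def merge_short_bursts(segments, min_duration=0.5):
    """Fold segments shorter than min_duration into the previous turn."""
    merged = []
    for spk, start, end, text in segments:
        if merged and (end - start) < min_duration:
            p_spk, p_start, _p_end, p_text = merged[-1]
            merged[-1] = (p_spk, p_start, end, p_text + " " + text)
        else:
            merged.append((spk, start, end, text))
    return merged

print(merge_short_bursts(segments))
```

After merging, the 0.3 s "Uh." burst is absorbed into speaker A's first turn, leaving two segments instead of three.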
SRT & subtitle generation
Export subtitles in SRT or VTT formats suitable for video players and editors.
- Customizable subtitle length and max characters per line
- Word-level timing for accurate sync
- Support for multiple output encodings
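For readers unfamiliar with the SRT format, the following standalone sketch shows the cue structure the export produces (this is generic SRT formatting, not the service's implementation):

```python
def srt_timestamp(seconds):
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def srt_cue(index, start, end, text):
    """Build one numbered SRT cue: index, timing line, then the subtitle text."""
    return f"{index}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n"

print(srt_cue(1, 0.0, 2.5, "Hello, world."))
```

VTT differs mainly in using a `.` instead of a `,` as the millisecond separator and requiring a `WEBVTT` header line.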
Word-level timestamps
Get exact timestamps per word for alignment, search, and subtitle accuracy.
- Millisecond-level timing
- Align transcripts to media players or editors
- Useful for search, redaction, and analytics
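A short sketch of the search use case: given word-level timestamps, jump straight to where a term is spoken. The `word`/`start_ms`/`end_ms` field names are assumptions for illustration, not the API's documented schema:

```python
# Illustrative word-level timestamps in milliseconds; field names are assumptions.
words = [
    {"word": "welcome", "start_ms": 120, "end_ms": 480},
    {"word": "to", "start_ms": 480, "end_ms": 560},
    {"word": "orbitals", "start_ms": 560, "end_ms": 1040},
]

def find_word(words, target):
    """Return (start_ms, end_ms) of the first case-insensitive match, or None."""
    for w in words:
        if w["word"].lower() == target.lower():
            return (w["start_ms"], w["end_ms"])
    return None

print(find_word(words, "Orbitals"))  # (560, 1040)
```

The same lookup underpins redaction (bleep the matched span) and analytics (how often and when a term occurs).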
Quick Example
Upload a file and request detailed output including SRT and diarization.
Tip
If you already use one of our SDKs you can call the same endpoints; examples for Python and JavaScript are available below in the SDK sections.
Python (sync)
main.py
```python
import orbitalsai

client = orbitalsai.Client(api_key="your_api_key_here")

# Transcribe with automatic waiting
transcript = client.transcribe("audio.mp3", model_name="Perigee-1")

print(transcript.text)
print(transcript.audio_duration)  # Duration in seconds
```

JavaScript / Node.js
quick-start.js
```javascript
import { OrbitalsClient } from "orbitalsai";

// Initialize the client with your API key
const client = new OrbitalsClient({
  apiKey: "your-api-key-here",
});

// Upload and transcribe audio
const file = /* your audio file */;
const upload = await client.audio.upload(file);

// Wait for transcription to complete
const result = await client.audio.waitForCompletion(upload.task_id);
console.log("Transcription:", result.result_text);
```

Output Options
| Option | Type | Default | Description |
|---|---|---|---|
| `language` | string | `auto` | Spoken language code; `auto` enables automatic detection |
| `features` | array | `["transcript"]` | Enable `diarization`, `srt`, `word_timestamps`, etc. |
| `subtitle_format` | string | `srt` | Choose between `srt` and `vtt` |
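The options above can be collected into a single request payload. Whether the SDK accepts them as keyword arguments or as a dict is an assumption here, so treat this as a sketch of the merge-with-defaults pattern rather than the SDK's actual signature:

```python
# Defaults mirror the Output Options table; passing this payload to the SDK
# is an assumption about its interface, shown here only as a sketch.
DEFAULTS = {"language": "auto", "features": ["transcript"], "subtitle_format": "srt"}

def build_options(**overrides):
    """Merge caller overrides onto the documented defaults, rejecting typos."""
    unknown = set(overrides) - set(DEFAULTS)
    if unknown:
        raise ValueError(f"unknown options: {sorted(unknown)}")
    return {**DEFAULTS, **overrides}

opts = build_options(features=["transcript", "diarization", "srt"],
                     subtitle_format="vtt")
print(opts)
```

Rejecting unknown keys up front catches misspelled option names before a request is ever sent.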
SDKs & Next Steps
The easiest way to integrate is via our SDKs; documentation and examples are available for Python and JavaScript.