Pre-recorded Transcription
Available Now

Transcribe Audio Files

Upload audio files and receive accurate transcripts, timestamps, speaker labels, and subtitle files. Optimized for African languages and production workloads.

What you'll get

Automatic language detection

Our service automatically detects the spoken language(s) in your audio, so for many files you don't need to set the locale manually.

  • Detects mixed-language audio segments
  • Confidence scores per segment
  • Option to override detection if needed
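
To illustrate how detection results might be consumed, here is a minimal sketch that prints the detected language and confidence for each segment. The `segments`, `language`, and `language_confidence` fields are assumptions used for illustration, not the confirmed response schema.

language_detection.py
python
import orbitalsai

client = orbitalsai.Client(api_key="your_api_key_here")
transcript = client.transcribe("audio.mp3", model_name="Perigee-1")

# NOTE: `segments`, `language`, and `language_confidence` are assumed field
# names for illustration; check the SDK reference for the exact schema.
for segment in transcript.segments:
    print(segment.language, round(segment.language_confidence, 2), segment.text)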

Speaker diarization

Identify and label different speakers automatically for clearer transcripts and downstream analytics.

  • Per-word speaker tags when available
  • Segment-level speaker timestamps
  • Post-processing to merge short speaker bursts
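
As a rough sketch, the snippet below walks the diarized segments and prints one line per speaker turn. The `speaker`, `start`, and `end` segment fields are assumed names used for illustration.

diarization.py
python
import orbitalsai

client = orbitalsai.Client(api_key="your_api_key_here")
transcript = client.transcribe("meeting.mp3", model_name="Perigee-1")

# NOTE: `speaker`, `start`, and `end` are assumed segment fields for illustration.
for segment in transcript.segments:
    print(f"[{segment.start:.2f}s - {segment.end:.2f}s] {segment.speaker}: {segment.text}")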

SRT & subtitle generation

Export subtitles in SRT or VTT formats suitable for video players and editors.

  • Customizable subtitle length and max characters per line
  • Word-level timing for accurate sync
  • Support for multiple output encodings
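
A minimal sketch of writing an SRT file from a completed transcript, assuming a hypothetical `export_subtitles` helper with `format` and `max_chars_per_line` parameters; the real export call may differ.

export_srt.py
python
import orbitalsai

client = orbitalsai.Client(api_key="your_api_key_here")
transcript = client.transcribe("lecture.mp3", model_name="Perigee-1")

# NOTE: `export_subtitles` and its parameters are hypothetical names used for
# illustration only; consult the SDK reference for the actual export call.
srt_text = transcript.export_subtitles(format="srt", max_chars_per_line=42)

with open("lecture.srt", "w", encoding="utf-8") as f:
    f.write(srt_text)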

Word-level timestamps

Get exact timestamps per word for alignment, search, and subtitle accuracy.

  • Millisecond-level timing
  • Align transcripts to media players or editors
  • Useful for search, redaction, and analytics
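
A short sketch of consuming per-word timing for search or redaction, assuming the transcript exposes a `words` list with `text`, `start`, and `end` fields (in seconds); the actual response shape may differ.

word_timestamps.py
python
import orbitalsai

client = orbitalsai.Client(api_key="your_api_key_here")
transcript = client.transcribe("interview.mp3", model_name="Perigee-1")

# NOTE: a `words` list with per-word `text`, `start`, and `end` (seconds) is an
# assumed response shape for illustration.
for word in transcript.words:
    print(f"{word.text}\t{word.start:.3f}\t{word.end:.3f}")

# Example use: locate every occurrence of a term for search or redaction.
matches = [w for w in transcript.words if w.text.lower() == "invoice"]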

Quick Example

Upload a file and request detailed output including SRT and diarization.

Tip
If you already use one of our SDKs, you can call the same endpoints; examples for Python and JavaScript are available below in the SDK sections.

Python (sync)

main.py
python
import orbitalsai

client = orbitalsai.Client(api_key="your_api_key_here")

# Transcribe with automatic waiting
transcript = client.transcribe("audio.mp3", model_name="Perigee-1")

print(transcript.text)
print(transcript.audio_duration)  # Duration in seconds
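
The call above uses the default output. To request the richer output this example describes (diarization, SRT, word timestamps), something like the sketch below could work, assuming the options from the Output Options table further down are passed as keyword arguments to `transcribe`; that mapping is an assumption, not the confirmed API.

detailed_output.py
python
import orbitalsai

client = orbitalsai.Client(api_key="your_api_key_here")

# The `features` and `subtitle_format` values mirror the Output Options table;
# passing them as keyword arguments to `transcribe` is an assumption.
transcript = client.transcribe(
    "audio.mp3",
    model_name="Perigee-1",
    features=["transcript", "diarization", "srt", "word_timestamps"],
    subtitle_format="srt",
)
print(transcript.text)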

JavaScript / Node.js

quick-start.js
javascript
import { OrbitalsClient } from "orbitalsai";

// Initialize the client with your API key
const client = new OrbitalsClient({
  apiKey: "your-api-key-here",
});

// Upload and transcribe audio
const file = /* your audio file */;
const upload = await client.audio.upload(file);

// Wait for transcription to complete
const result = await client.audio.waitForCompletion(upload.task_id);
console.log("Transcription:", result.result_text);

Output Options

Option            Type     Default           Description
language          string   auto              Auto-detect spoken language
features          array    ["transcript"]    Enable diarization, srt, word_timestamps, etc.
subtitle_format   string   srt               Choose between srt and vtt
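
When the spoken language is known ahead of time, detection can be overridden via the `language` option. The sketch below assumes these options are accepted as keyword arguments by `transcribe`; both the parameter passing and the example language code are illustrative assumptions.

output_options.py
python
import orbitalsai

client = orbitalsai.Client(api_key="your_api_key_here")

# Override automatic detection with an explicit language code; the "sw"
# (Swahili) value and keyword passing are assumptions for illustration.
transcript = client.transcribe(
    "audio.mp3",
    model_name="Perigee-1",
    language="sw",
    features=["transcript", "word_timestamps"],
)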

SDKs & Next Steps

The easiest way to integrate is via our SDKs; documentation and examples are available for Python and JavaScript.