Faster Whisper

@theplasmak
development · Speech-to-Text · Local Transcription · GPU Acceleration

Local speech-to-text using faster-whisper, a CTranslate2 reimplementation of OpenAI's Whisper that runs 4-6x faster with the same accuracy. With GPU acceleration, it achieves ~20x realtime transcription speed. Supports 99+ languages with auto-detection, word-level timestamps for subtitles, and works completely offline after a one-time model download.

🚀 Faster Whisper converts speech to text locally on your computer—4-6x faster than standard Whisper with identical accuracy. Perfect for transcribing meetings, podcasts, interviews, and videos without cloud costs or internet dependency.
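
As a rough sketch of what a basic transcription looks like (the "small" model size and "meeting.mp3" file name are placeholders, not fixed choices):

```python
from faster_whisper import WhisperModel

# Load a model; it is downloaded and cached on first use
model = WhisperModel("small", device="cpu", compute_type="int8")

# transcribe() returns a generator of segments plus info about the audio
segments, info = model.transcribe("meeting.mp3", beam_size=5)

print(f"Detected language: {info.language} (probability {info.language_probability:.2f})")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```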

💡 Use this when you need to transcribe audio files, generate subtitles with word-level timestamps, or process multiple files in batch. Supports 99+ languages with automatic detection, making it ideal for multilingual content.
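
For subtitle work, a minimal sketch of word-level timestamps (the file name is a placeholder; formatting into SRT/VTT is left to your own tooling):

```python
from faster_whisper import WhisperModel

model = WhisperModel("small", device="cpu", compute_type="int8")

# word_timestamps=True attaches per-word start/end times to each segment
segments, _ = model.transcribe("interview.mp3", word_timestamps=True)

for segment in segments:
    for word in segment.words:
        print(f"{word.start:6.2f} - {word.end:6.2f}  {word.word}")
```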

✨ Run transcriptions offline after downloading the model once. With GPU acceleration, convert a 10-minute audio file in just 30 seconds—no API fees, complete privacy, and full control over your data.
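
A sketch of the usual device/precision combinations; actual speed depends on your hardware and model size:

```python
from faster_whisper import WhisperModel

# GPU: float16 on a CUDA-capable card gives the largest speed-up
gpu_model = WhisperModel("large-v3", device="cuda", compute_type="float16")

# CPU fallback: int8 quantization keeps memory use and latency manageable
cpu_model = WhisperModel("large-v3", device="cpu", compute_type="int8")
```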

GitHub

Requirements

faster-whisper

CTranslate2-based Whisper implementation for fast speech-to-text

CTranslate2

Fast inference engine for neural machine translation and speech models