Multilingual-Audio-Text-extraction-openai-whisper

OpenAI's Whisper, specifically the large-v3 variant, is a transformer-based model for automatic speech recognition (ASR). It can recognize and transcribe speech in 99 languages, so a single model handles audio from a diverse set of languages, making it a practical tool for speech-to-text tasks worldwide.
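
As a quick illustration, the sketch below transcribes a single file with the openai-whisper Python package (`pip install -U openai-whisper`; ffmpeg must be on the PATH). The filename `audio.mp3` is a placeholder:

```python
# Minimal sketch: transcribe one file with the large-v3 checkpoint.
# Assumes the openai-whisper package and ffmpeg are installed;
# "audio.mp3" is a placeholder filename.
import whisper

model = whisper.load_model("large-v3")   # downloads ~3 GB of weights on first use
result = model.transcribe("audio.mp3")   # language is auto-detected by default

print(result["language"])  # detected language code, e.g. "hi"
print(result["text"])      # the extracted transcript
```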

Features:

  • Multilingual Support: The large-v3 model extracts text from audio in a wide range of languages, making it suitable for global applications (see the language-detection sketch after this list).
  • High Accuracy: The model is trained on vast amounts of diverse audio data, enabling robust performance on noisy, accented, or otherwise complex audio.
  • Large-Scale Model: With 1.5 billion parameters, this version of Whisper leverages a state-of-the-art transformer architecture to deliver precise transcriptions.
  • Versatile Applications: The model can be used for transcription services, voice-to-text applications, content translation, podcast indexing, and more.
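
A minimal sketch of the multilingual side, following the language-detection flow of the openai-whisper API; the filename `hindi_clip.wav` is a placeholder:

```python
# Sketch: detect the spoken language before transcribing.
# "hindi_clip.wav" is a placeholder filename.
import whisper

model = whisper.load_model("large-v3")

# Whisper scores languages on a 30-second window, so pad or trim to fit.
audio = whisper.load_audio("hindi_clip.wav")
audio = whisper.pad_or_trim(audio)

# large-v3 uses 128 mel bins (earlier checkpoints use 80);
# reading n_mels off the model handles both cases.
mel = whisper.log_mel_spectrogram(audio, n_mels=model.dims.n_mels).to(model.device)

_, probs = model.detect_language(mel)
print(f"Detected language: {max(probs, key=probs.get)}")

# Alternatively, pin the language if it is already known:
result = model.transcribe("hindi_clip.wav", language="hi")
print(result["text"])
```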

Whisper Model Sizes and Parameters

| Model    | Parameters | English-only | Multilingual |
|----------|------------|--------------|--------------|
| tiny     | 39 M       | ✓            | ✓            |
| base     | 74 M       | ✓            | ✓            |
| small    | 244 M      | ✓            | ✓            |
| medium   | 769 M      | ✓            | ✓            |
| large    | 1550 M     | ✗            | ✓            |
| large-v2 | 1550 M     | ✗            | ✓            |
| large-v3 | 1550 M     | ✗            | ✓            |

Source: OpenAI Whisper repository (https://github.com/openai/whisper)
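
Programmatically, the checkpoint name passed to `whisper.load_model` selects a row of this table; a short sketch:

```python
# Sketch: English-only checkpoints carry an ".en" suffix and stop at
# medium; the large family ships multilingual checkpoints only.
import whisper

english_only = whisper.load_model("base.en")   # 74 M parameters, English only
multilingual = whisper.load_model("large-v3")  # 1550 M parameters, 99 languages
```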

Why Use Whisper large-v3?

  • Multilingual Speech Recognition: Extract text from audio in multiple languages without needing a separate model for each language.
  • Pretrained and Ready to Use: Leverage OpenAI's pretrained weights for immediate use in ASR tasks (see the sketch after this list).
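
For the pretrained-and-ready point, the same weights are also published on the Hugging Face Hub as `openai/whisper-large-v3` and work with the transformers pipeline; a minimal sketch, with `audio.mp3` again a placeholder:

```python
# Sketch: run the pretrained large-v3 weights via the Hugging Face
# transformers ASR pipeline. "audio.mp3" is a placeholder filename.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-large-v3")
print(asr("audio.mp3")["text"])
```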