Open ASR Leaderboard
Request evaluation of a speech recognition model
You May Also Like
Whisper
Transcribe audio from microphone, file, or YouTube link
Text To Speech
Convert text to speech with Next-gen Kaldi
FunASR
Convert speech to text from audio files
Whisper Turbo
Transcribe or translate audio and YouTube videos
Text-to-Speech WebGPU
WebGPU text-to-speech powered by OuteTTS and Transformers.js
Persian Speech Transcription
Transcribe Persian audio to text
Text To Video
Generate audio and SRT subtitles from text
F5-TTS-Vietnamese
Generate Vietnamese speech from text and reference audio
Whisper WebGPU
Convert spoken words to text
WebAssembly English TTS (sherpa-onnx)
Generate speech from text with adjustable speed
Audio Arena
Talk To Claude
Converse with Claude using Play.ai and WebRTC
What is Open ASR Leaderboard?
Open ASR Leaderboard is a platform for evaluating and benchmarking speech recognition models. It gives developers and researchers a central place to measure the performance of their automatic speech recognition (ASR) systems against established benchmarks and to compare them with other models.
Features
• Comprehensive evaluation metrics: The leaderboard provides detailed performance metrics, including word error rate (WER), character error rate (CER), and real-time factor (RTF); a sketch of how these are computed appears after this list.
• Multi-language support: It supports evaluation across multiple languages and accents, making it a versatile tool for diverse datasets.
• Benchmark datasets: Access to standardized test datasets for consistent and fair model comparison.
• Customizable evaluation: Users can define specific test scenarios or use predefined configurations.
• Visualization tools: Results are presented in interactive charts and tables for easy analysis.
• Community collaboration: A forum for sharing insights, best practices, and model improvements.
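The headline metrics are standard ASR measures, so they can be reproduced locally. Below is a minimal sketch using the open-source jiwer package; the leaderboard's own scoring pipeline and text normalization may differ.

```python
# Minimal sketch: computing WER, CER, and RTF locally with jiwer
# (pip install jiwer). The leaderboard's exact normalization and
# scoring rules may differ.
import time
import jiwer

references = ["the cat sat on the mat"]
hypotheses = ["the cat sat on a mat"]

wer = jiwer.wer(references, hypotheses)  # word error rate
cer = jiwer.cer(references, hypotheses)  # character error rate
print(f"WER: {wer:.3f}  CER: {cer:.3f}")

# Real-time factor: processing time divided by audio duration.
audio_seconds = 6.2                      # hypothetical clip length
start = time.perf_counter()
# ... run your ASR model's inference on the clip here ...
rtf = (time.perf_counter() - start) / audio_seconds
print(f"RTF: {rtf:.3f}")
```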
How to use Open ASR Leaderboard?
- Prepare your model: Ensure your ASR model is trained and ready for evaluation.
- Submit your model: Upload your model to the Open ASR Leaderboard platform following the submission guidelines.
- Run the evaluation: The platform automatically evaluates your model against the benchmark datasets (a local dry run is sketched after this list).
- Review results: Analyze the performance metrics and compare your model with others on the leaderboard.
- Refine and resubmit: Based on the results, refine your model and resubmit for improved performance.
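Before submitting, it can help to sanity-check your model with a rough local dry run: transcribe a small public test set and score it. The sketch below uses Hugging Face transformers, datasets, and jiwer; the model and dataset names are illustrative stand-ins, not the leaderboard's official configuration, so follow the platform's submission guidelines for the real benchmark setup.

```python
# Illustrative local dry run; not the platform's official pipeline.
# Model and dataset names are examples only.
from datasets import load_dataset
from transformers import pipeline
import jiwer

asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

# Tiny LibriSpeech sample set commonly used in transformers examples.
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy",
                  "clean", split="validation")

refs, hyps = [], []
for sample in ds:
    result = asr(sample["audio"])  # dict with "array" and "sampling_rate"
    # Crude normalization; real benchmarks apply stricter text normalizers.
    refs.append(sample["text"].lower())
    hyps.append(result["text"].lower())

print(f"WER: {jiwer.wer(refs, hyps):.3f}")
```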
Frequently Asked Questions
What types of speech recognition models can I evaluate?
You can evaluate any automatic speech recognition model, including deep learning-based models, traditional HMM-based systems, or hybrid approaches.
How often are the leaderboards updated?
The leaderboards are updated regularly as new models are submitted and evaluated. Updates are typically announced in the community forum.
Can I use custom datasets for evaluation?
Yes, you can upload custom test datasets for evaluation, provided they meet the platform's formatting requirements (an illustrative manifest format is sketched below).
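The platform defines its own formatting requirements, which aren't restated here. As one illustration, many ASR toolkits (e.g., NeMo) accept a JSON-Lines manifest that pairs each audio file with its reference transcript. The field names below follow that convention and are hypothetical, not the leaderboard's confirmed schema.

```python
# Hypothetical JSON-Lines manifest for a custom test set.
# Field names ("audio_filepath", "text", "duration") follow a common
# ASR convention (NeMo-style manifests) and are NOT the platform's
# confirmed schema -- check its formatting requirements.
import json

samples = [
    {"audio_filepath": "clips/utt_0001.wav", "text": "hello world", "duration": 1.4},
    {"audio_filepath": "clips/utt_0002.wav", "text": "open asr leaderboard", "duration": 2.1},
]

with open("custom_test_manifest.jsonl", "w", encoding="utf-8") as f:
    for s in samples:
        f.write(json.dumps(s) + "\n")
```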