Request evaluation of a speech recognition model
AI-powered platform to clean up and enhance your speech signal
Whisper model to transcribe Japanese audio to katakana.
Generate speech from text with adjustable rate and pitch
Transcribe or translate audio and YouTube videos
Convert text to speech with different voices
Generate customized audio from text using a voice sample
Generate natural-sounding speech from text using a voice you choose
Moonshine ASR models running on-device, in your web browser.
WebGPU text-to-speech powered by OuteTTS and Transformers.js
Talk to Qwen2Audio with Gradio and WebRTC ⚡️
Accessibility PDF & pasted text to speech converter w/ gTTS
CPU-powered, low-RTF, emotional, multilingual TTS
Open ASR Leaderboard is a platform designed to evaluate and benchmark speech recognition models. It provides a centralized location for developers and researchers to assess the performance of their automatic speech recognition (ASR) systems against established standards and compare them with other models.
• Comprehensive evaluation metrics: The leaderboard provides detailed performance metrics, including word error rate (WER), character error rate (CER), and real-time factor (RTF).
• Multi-language support: It supports evaluation across multiple languages and accents, making it a versatile tool for diverse datasets.
• Benchmark datasets: Access to standardized test datasets for consistent and fair model comparison.
• Customizable evaluation: Users can define specific test scenarios or use predefined configurations.
• Visualization tools: Results are presented in interactive charts and tables for easy analysis.
• Community collaboration: A forum for sharing insights, best practices, and model improvements.
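The core metrics above have simple definitions: WER is the word-level edit distance between hypothesis and reference divided by the number of reference words, and RTF is processing time divided by audio duration (below 1.0 means faster than real time). A minimal illustrative sketch, not the leaderboard's actual scoring code:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # One-row dynamic-programming edit distance over words.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            cur = min(d[j] + 1,         # deletion of a reference word
                      d[j - 1] + 1,     # insertion of a hypothesis word
                      prev + (r != h))  # substitution (or match if equal)
            prev, d[j] = d[j], cur
    return d[len(hyp)] / len(ref)

def rtf(processing_seconds: float, audio_seconds: float) -> float:
    """Real-time factor: processing time over audio duration."""
    return processing_seconds / audio_seconds

# One deleted word out of six reference words -> WER of 1/6.
print(wer("the cat sat on the mat", "the cat sat on mat"))
# 2.5 s to process 10 s of audio -> RTF of 0.25 (faster than real time).
print(rtf(2.5, 10.0))
```

CER is computed the same way at the character level (iterate over characters instead of `split()` tokens), which is why it is the preferred metric for languages without whitespace word boundaries.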
What types of speech recognition models can I evaluate?
You can evaluate any automatic speech recognition model, including deep learning-based models, traditional HMM-based systems, or hybrid approaches.
How often are the leaderboards updated?
The leaderboards are updated regularly as new models are submitted and evaluated. Updates are typically announced in the community forum.
Can I use custom datasets for evaluation?
Yes, you can upload custom test datasets for evaluation, provided they meet the platform's formatting requirements.
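The exact formatting requirements are not spelled out here, but ASR test sets are commonly packaged as a JSONL manifest pairing each audio file with its reference transcript. The field names below (`audio_filepath`, `text`) are an assumed convention for illustration, not the platform's confirmed schema:

```python
import json

# Hypothetical manifest rows: one JSON object per line, each linking an
# audio file to its reference transcript. Field names are illustrative.
samples = [
    {"audio_filepath": "clips/utt_0001.wav", "text": "hello world"},
    {"audio_filepath": "clips/utt_0002.wav", "text": "speech recognition test"},
]

def write_manifest(path: str, rows: list[dict]) -> None:
    """Write one JSON object per line (JSONL)."""
    with open(path, "w", encoding="utf-8") as f:
        for row in rows:
            f.write(json.dumps(row, ensure_ascii=False) + "\n")

def load_manifest(path: str) -> list[dict]:
    """Read a JSONL manifest back and sanity-check required fields."""
    with open(path, encoding="utf-8") as f:
        rows = [json.loads(line) for line in f]
    for row in rows:
        assert "audio_filepath" in row and "text" in row, f"bad row: {row}"
    return rows

write_manifest("test_manifest.jsonl", samples)
assert load_manifest("test_manifest.jsonl") == samples
```

Validating the manifest locally like this before uploading catches missing fields or malformed lines early, whatever the platform's actual required schema turns out to be.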