A benchmark for open-source multi-dialect Arabic ASR models
The Open Universal Arabic ASR Leaderboard is a benchmark platform designed to evaluate and compare open-source Arabic Automatic Speech Recognition (ASR) models. It focuses on multi-dialect Arabic speech recognition, providing a unified space for researchers and developers to assess model performance across diverse dialects and scenarios. The leaderboard ensures transparency and consistency in model evaluation, fostering innovation and collaboration in the field of Arabic speech recognition.
• Multi-dialect support: evaluates ASR models across various Arabic dialects, including Modern Standard Arabic (MSA) and colloquial dialects.
• Open-source focus: promotes the use of open-source models to encourage community-driven improvements.
• Interactive web interface: lets users explore benchmark results, compare models, and visualize performance metrics.
• Request functionality: users can submit requests for benchmarking new or custom Arabic ASR models.
• Comprehensive metrics: provides detailed performance metrics, such as Word Error Rate (WER), Character Error Rate (CER), and accuracy scores.
• Model comparison tools: enables side-by-side comparison of different models by dialect, accuracy, and performance.
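The headline metrics, WER and CER, are both edit-distance ratios: the minimum number of insertions, deletions, and substitutions needed to turn the hypothesis into the reference, divided by the reference length, at word and character granularity respectively. A minimal stdlib-only sketch of the standard computation (the function names `wer` and `cer` are illustrative; this is not the leaderboard's actual scoring code, which may apply extra text normalization before scoring):

```python
def _edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (classic DP)."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # delete all remaining ref tokens
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # insert all remaining hyp tokens
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)]

def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edits / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    return _edit_distance(ref, hyp) / len(ref)

def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: character-level edits / reference length."""
    return _edit_distance(list(reference), list(hypothesis)) / len(reference)
```

For example, if the hypothesis substitutes one of three reference words, `wer` returns 1/3; a perfect transcript scores 0.0. Note that WER can exceed 1.0 when the hypothesis contains many insertions.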
1. What dialects are supported on the leaderboard?
The Open Universal Arabic ASR Leaderboard supports a wide range of Arabic dialects, including Modern Standard Arabic (MSA), Egyptian, Levantine, Gulf, Iraqi, and Moroccan dialects.
2. How often are the benchmarks updated?
Benchmarks are updated periodically as new models are submitted or existing models are re-evaluated. Updates are typically announced on the platform’s news section or social media channels.
3. Can I contribute my own ASR model to the leaderboard?
Yes, the leaderboard encourages contributions from the community. You can submit your model for benchmarking by following the submission guidelines provided on the platform.