Browse and submit evaluation results for AI benchmarks
Monitor application health
Uncensored General Intelligence Leaderboard
Explore speech recognition model performance
Make a RAG evaluation dataset, 100% compatible with AutoRAG
Migrate datasets from GitHub or Kaggle to Hugging Face Hub
Label data for machine learning models
Browse and explore datasets from Hugging Face
Profile a dataset and publish the report on Hugging Face
Display a treemap of languages and datasets
Open Agent Leaderboard
More advanced and challenging multi-task evaluation
Display server status information
Leaderboard is a data visualization tool for browsing and submitting evaluation results for AI benchmarks. It gives researchers and developers a single place to compare performance metrics across AI models, supporting informed model selection.
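Leaderboard results like these are often published as a dataset on the Hugging Face Hub. A minimal sketch of pulling such results down for offline analysis, assuming a hypothetical repo id my-org/leaderboard-results and hypothetical column names model and average_score (the Space's actual storage layout is not documented here):

```python
# Hypothetical sketch: load leaderboard results published as a Hub dataset.
# The repo id and column names below are assumptions for illustration,
# not this Space's actual schema.
from datasets import load_dataset

results = load_dataset("my-org/leaderboard-results", split="train")
df = results.to_pandas()

# Show the top five models by average score.
top5 = df.sort_values("average_score", ascending=False).head(5)
print(top5[["model", "average_score"]])
```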
What types of AI models can I find on Leaderboard?
Leaderboard supports a wide range of AI models, including but not limited to natural language processing, computer vision, and reinforcement learning models.
Can I filter results by specific datasets?
Yes, Leaderboard allows users to filter results by dataset, enabling more targeted comparisons and analyses.
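One plausible way such a filter works is a simple column match on a flat results table. A self-contained sketch with pandas; the benchmark and score column names, and the sample rows, are assumptions rather than the Space's actual data:

```python
# Self-contained sketch of dataset-level filtering on a results table.
# Column names and values are illustrative assumptions.
import pandas as pd

results = pd.DataFrame(
    [
        {"model": "model-a", "benchmark": "mmlu", "score": 71.2},
        {"model": "model-b", "benchmark": "mmlu", "score": 68.9},
        {"model": "model-a", "benchmark": "hellaswag", "score": 84.5},
    ]
)

# Keep only the rows for one benchmark dataset, ranked by score.
mmlu = results[results["benchmark"] == "mmlu"].sort_values("score", ascending=False)
print(mmlu)
```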
How often is the Leaderboard updated?
The Leaderboard is updated in real-time as new benchmark results are submitted and verified.