Browse and submit evaluation results for AI benchmarks
Leaderboard is a data visualization tool for browsing and submitting evaluation results for AI benchmarks. It gives researchers and developers a single place to compare and analyze performance metrics across AI models, supporting informed model selection.
What types of AI models can I find on Leaderboard?
Leaderboard supports a wide range of AI models, including natural language processing, computer vision, and reinforcement learning models.
Can I filter results by specific datasets?
Yes, Leaderboard allows users to filter results by dataset, enabling more targeted comparisons and analyses.
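To illustrate what dataset filtering amounts to, here is a minimal sketch using pandas. The column names (`model`, `dataset`, `score`) and the `filter_by_dataset` helper are illustrative assumptions, not Leaderboard's actual schema or API:

```python
# Hypothetical sketch: filtering leaderboard-style results by dataset.
# Assumes each result is a row of (model, dataset, score); these column
# names are illustrative, not Leaderboard's real schema.
import pandas as pd

results = pd.DataFrame(
    [
        {"model": "model-a", "dataset": "squad", "score": 88.1},
        {"model": "model-b", "dataset": "squad", "score": 85.4},
        {"model": "model-a", "dataset": "glue", "score": 79.9},
    ]
)

def filter_by_dataset(df: pd.DataFrame, dataset: str) -> pd.DataFrame:
    """Keep only rows benchmarked on the given dataset, best score first."""
    return df[df["dataset"] == dataset].sort_values("score", ascending=False)

squad_results = filter_by_dataset(results, "squad")
print(squad_results["model"].tolist())
```

Filtering before comparing keeps the ranking meaningful: scores from different benchmarks are generally not comparable on the same scale.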
How often is the Leaderboard updated?
The Leaderboard is updated in real-time as new benchmark results are submitted and verified.