Browse and submit evaluation results for AI benchmarks
Open Agent Leaderboard
Leaderboard is a data visualization tool for browsing and submitting evaluation results for AI benchmarks. It gives researchers and developers a single platform to compare and analyze performance metrics across AI models, supporting informed decision-making.
What types of AI models can I find on Leaderboard?
Leaderboard supports a wide range of AI models, including natural language processing, computer vision, and reinforcement learning models.
Can I filter results by specific datasets?
Yes, Leaderboard allows users to filter results by dataset, enabling more targeted comparisons and analyses.
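The Space itself filters through the GUI, and no programmatic API is documented here, but the same dataset filter can be sketched in pandas. The column names (`model`, `dataset`, `score`) and the sample rows below are illustrative assumptions, not the actual leaderboard schema:

```python
import pandas as pd

# Hypothetical export of leaderboard results; columns and values are assumptions.
results = pd.DataFrame({
    "model": ["model-a", "model-b", "model-a"],
    "dataset": ["mmlu", "mmlu", "gsm8k"],
    "score": [0.81, 0.74, 0.92],
})

# Keep only rows evaluated on one dataset, then rank by score, best first.
mmlu_only = results[results["dataset"] == "mmlu"].sort_values(
    "score", ascending=False
)
print(mmlu_only)
```

This mirrors what the GUI filter does: restrict the results table to a single benchmark before comparing models, so scores from different datasets are never ranked against each other.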
How often is the Leaderboard updated?
The Leaderboard is updated in real time as new benchmark results are submitted and verified.