Companion leaderboard for the SLM survey paper
SLM Leaderboard is a companion leaderboard for the SLM survey paper, designed to display and organize benchmark results from the models evaluated in the paper. It serves as a centralized platform for researchers and developers to track performance metrics and compare SLMs (Specialized Language Models) across different tasks and datasets.
What does SLM stand for?
SLM stands for Specialized Language Models, which are language models tailored for specific tasks or domains.
Can I customize the leaderboard's appearance?
Yes, you can filter and sort the data to view specific models or tasks. However, the leaderboard itself is rendered as a static markdown table and cannot be altered in real time.
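Since the leaderboard is a static markdown table, one way to filter and sort it yourself is to parse the table text locally. The sketch below is a minimal, hypothetical example; the column names ("Model", "Task", "Score") and the scores are made up for illustration and do not come from the actual leaderboard.

```python
# Hypothetical leaderboard snippet; columns and values are illustrative only.
table = """\
| Model | Task | Score |
|-------|------|-------|
| SLM-A | QA   | 71.2  |
| SLM-B | QA   | 68.9  |
| SLM-A | NER  | 85.0  |
"""

def parse_rows(md: str) -> list[dict]:
    """Turn a markdown table into a list of {column: value} dicts."""
    # Drop the |---|---| separator line, which contains only '|', '-', ' '.
    lines = [l for l in md.strip().splitlines() if not set(l) <= set("|- ")]
    header, *rows = [
        [cell.strip() for cell in line.strip("|").split("|")] for line in lines
    ]
    return [dict(zip(header, row)) for row in rows]

rows = parse_rows(table)

# Keep only QA results, sorted with the highest score first.
qa = sorted(
    (r for r in rows if r["Task"] == "QA"),
    key=lambda r: float(r["Score"]),
    reverse=True,
)
print([r["Model"] for r in qa])  # → ['SLM-A', 'SLM-B']
```

The same pattern extends to any column: change the filter predicate to select a model instead of a task, or the `key` function to sort by a different metric.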
How often is the leaderboard updated?
The leaderboard is updated periodically as new models or benchmark results are published in the SLM survey paper.