Companion leaderboard for the SLM survey paper
Transfer GitHub repositories to Hugging Face Spaces
Check your progress in a Deep RL course
Analyze weekly and daily trader performance in Olas Predict
Generate a data profile report
Multilingual metrics for the LMSys Arena Leaderboard
Open Agent Leaderboard
Display CLIP benchmark results for inference performance
Explore token probability distributions with sliders
Mapping Nieman Lab's 2025 Journalism Predictions
Generate plots for GP and PFN posterior approximations
Browse LLM benchmark results in various categories
Generate detailed data profile reports
SLM Leaderboard is the companion leaderboard for the SLM survey paper, designed to display and organize benchmark results from the various models evaluated in the paper. It serves as a centralized platform for researchers and developers to track performance metrics and compare SLMs (Specialized Language Models) across different tasks and datasets.
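If you would rather work with the results programmatically than browse them in the Space UI, a minimal sketch along the following lines could work. It assumes the Space stores its benchmark results in a CSV file; the repo id, file name, and column names below are hypothetical placeholders, not the leaderboard's actual layout.

```python
# Minimal sketch: pull a (hypothetical) results file from the Space and
# aggregate scores per model. Repo id, file name, and columns are assumptions.
import pandas as pd
from huggingface_hub import hf_hub_download

csv_path = hf_hub_download(
    repo_id="slm-survey/slm-leaderboard",  # hypothetical Space id
    filename="results.csv",                # hypothetical results file
    repo_type="space",
)

results = pd.read_csv(csv_path)
# Average score per model across all tasks, highest first.
print(results.groupby("model")["score"].mean().sort_values(ascending=False))
```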
What does SLM stand for?
SLM stands for Specialized Language Models, which are language models tailored for specific tasks or domains.
Can I customize the leaderboard's appearance?
Yes, to an extent: you can filter and sort the data to view specific models or tasks. The leaderboard itself, however, is a static markdown table, so its layout and styling cannot be changed in real time.
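For finer control than the built-in view offers, one option is to copy the table out and filter it locally. The sketch below assumes an exported CSV named slm_results.csv with model, task, and score columns; both the file and the schema are assumptions, not an official export of the leaderboard.

```python
# Hypothetical local filter/sort over an exported copy of the leaderboard.
import pandas as pd

df = pd.read_csv("slm_results.csv")  # assumed export, not an official file

# Restrict to one task and rank models on it, mirroring the filtering and
# sorting available in the Space itself.
qa_rows = df[df["task"] == "qa"]
print(qa_rows.sort_values("score", ascending=False).head(10))
```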
How often is the leaderboard updated?
The leaderboard is updated periodically as new models or benchmark results are added to the SLM survey paper.