View and submit LLM benchmark evaluations
Browse and submit LLM evaluations
Teach, test, and evaluate language models with MTEB Arena
Explore and manage STM32 ML models with the STM32AI Model Zoo dashboard
Display LLM benchmark leaderboard and info
Evaluate Text-to-Speech (TTS) systems using objective metrics
Calculate memory usage for LLMs
Push an ML model to the Hugging Face Hub
Display and filter leaderboard models
Browse and submit evaluations for CaselawQA benchmarks
Create and upload a Hugging Face model card
Download a TriplaneGaussian model checkpoint
Create demo spaces for models on Hugging Face
The Russian LLM Leaderboard is a benchmarking platform for evaluating and comparing large language models (LLMs) on Russian-language tasks. It provides a comprehensive overview of model performance and lets users view and submit evaluations for a wide range of LLMs. The leaderboard is a useful resource for researchers, developers, and enthusiasts who want to understand the capabilities of Russian language models.
What types of models are included in the Russian LLM Leaderboard?
The leaderboard includes a wide range of LLMs, from open-source models to commercial offerings, as long as they support the Russian language and have been benchmarked according to the platform's criteria.
How can I submit my own LLM for evaluation?
To submit your model, navigate to the submission section of the leaderboard and follow the provided guidelines. Ensure your model meets the specified requirements for benchmarking on Russian language tasks.
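If the leaderboard follows the common Hub-based submission pattern, a quick local sanity check can catch missing files before you submit. The sketch below is an assumption rather than the leaderboard's official procedure: it presumes your model is a public repository on the Hugging Face Hub and loads with the standard transformers AutoClasses, and the repo id is a placeholder for your own model.

```python
# Minimal pre-submission sanity check (assumption: the leaderboard expects a
# public Hub model that loads with the standard transformers AutoClasses).
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM

repo_id = "your-username/your-russian-llm"  # placeholder repo id

config = AutoConfig.from_pretrained(repo_id)           # config.json must be present
tokenizer = AutoTokenizer.from_pretrained(repo_id)     # tokenizer files must be present
model = AutoModelForCausalLM.from_pretrained(repo_id)  # weights must load without custom code

n_params = sum(p.numel() for p in model.parameters())
print(f"{repo_id} loads correctly with {n_params:,} parameters")
```

If any of these steps fail, fix the repository before submitting; the leaderboard's own submission guidelines remain the authoritative list of requirements.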
What factors influence the benchmarking scores?
Scores are influenced by performance on tasks such as text generation, question-answering, translation, and other linguistic benchmarks. The specific datasets and evaluation metrics used are detailed on the platform.
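As an illustration of how per-task results typically roll up into a single leaderboard figure, here is a simplified sketch. The metric, task names, and scores are placeholders, not the leaderboard's actual benchmark suite; the real datasets and evaluation metrics are documented on the platform.

```python
# Illustrative sketch: score one task and average per-task results, in the
# spirit of how leaderboards typically aggregate benchmarks. All task names
# and numbers below are hypothetical placeholders.

def exact_match(predictions, references):
    """Fraction of predictions that exactly match the reference answer."""
    matches = sum(p.strip().lower() == r.strip().lower()
                  for p, r in zip(predictions, references))
    return matches / len(references)

# Hypothetical per-task scores for one model.
task_scores = {
    "russian_qa": exact_match(["Москва", "1799"], ["Москва", "1799"]),
    "ru_en_translation_bleu": 0.34,   # placeholder BLEU score
    "text_generation_quality": 0.71,  # placeholder judged score
}

overall = sum(task_scores.values()) / len(task_scores)
print(f"Per-task scores: {task_scores}")
print(f"Overall leaderboard score: {overall:.3f}")
```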