View and submit LLM benchmark evaluations
Open Persian LLM Leaderboard
Multilingual Text Embedding Model Pruner
Create and manage ML pipelines with ZenML Dashboard
Calculate survival probability based on passenger details
Evaluate and submit AI model results for Frugal AI Challenge
GIFT-Eval: A Benchmark for General Time Series Forecasting
SolidityBench Leaderboard
Evaluate open LLMs in the languages of LATAM and Spain
Generate leaderboard comparing DNA models
Determine GPU requirements for large language models
Evaluate code generation with diverse feedback types
The Russian LLM Leaderboard is a benchmarking platform for evaluating and comparing large language models (LLMs) on the Russian language. It gives a comprehensive overview of model performance and lets users view and submit evaluations for various LLMs. It is aimed at researchers, developers, and enthusiasts who want to understand the capabilities of Russian-language models.
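As a rough illustration of the "view" side, the sketch below assumes the leaderboard publishes its evaluation results as a public dataset repository on the Hugging Face Hub, a common pattern for leaderboard Spaces; the repository id is hypothetical, so check the Space itself for the real location of its result files.

```python
# Minimal sketch, assuming the leaderboard stores per-model result files in a
# public dataset repo on the Hugging Face Hub (a common pattern for
# leaderboard Spaces). The repo id below is hypothetical.
from huggingface_hub import list_repo_files

results_repo = "some-org/russian-llm-leaderboard-results"  # hypothetical

files = list_repo_files(results_repo, repo_type="dataset")
print(files[:10])  # e.g. one JSON file of scores per evaluated model
```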
What types of models are included in the Russian LLM Leaderboard?
The leaderboard includes a wide range of LLMs, from open-source models to commercial offerings, as long as they support the Russian language and have been benchmarked according to the platform's criteria.
How can I submit my own LLM for evaluation?
To submit your model, go to the submission section of the leaderboard and follow the guidelines there. Make sure your model meets the stated requirements for benchmarking on Russian-language tasks.
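A common prerequisite for leaderboard submissions is that the model is publicly reachable on the Hugging Face Hub. The sketch below shows a hypothetical pre-submission sanity check (the repo id is a placeholder); the leaderboard's own guidelines remain authoritative.

```python
# Minimal sketch of a pre-submission sanity check, assuming the leaderboard
# requires a public model on the Hugging Face Hub. "my-org/my-russian-llm"
# is a placeholder repo id.
from huggingface_hub import model_info

repo_id = "my-org/my-russian-llm"  # hypothetical model repository

info = model_info(repo_id)  # raises if the repo is missing or private
print(f"Model id:  {info.id}")
print(f"Tags:      {info.tags}")
print(f"Downloads: {info.downloads}")
```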
What factors influence the benchmarking scores?
Scores are influenced by performance on tasks such as text generation, question-answering, translation, and other linguistic benchmarks. The specific datasets and evaluation metrics used are detailed on the platform.
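To make the scoring idea concrete, here is a minimal sketch of one widely used metric, exact-match accuracy on question answering; the leaderboard's actual metrics and datasets are the ones documented on the platform.

```python
# Minimal sketch of exact-match accuracy, one common QA benchmarking metric.
# The leaderboard's real datasets and metrics are documented on the platform.
def exact_match_accuracy(predictions: list[str], references: list[str]) -> float:
    """Fraction of predictions that equal the reference after normalization."""
    assert len(predictions) == len(references)
    matches = sum(
        pred.strip().lower() == ref.strip().lower()
        for pred, ref in zip(predictions, references)
    )
    return matches / len(references)

# Hypothetical Russian QA pairs:
preds = ["Москва", "Пушкин"]
refs = ["москва", "Лермонтов"]
print(exact_match_accuracy(preds, refs))  # 0.5
```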