View and submit LLM benchmark evaluations
The Russian LLM Leaderboard is a benchmarking platform for evaluating and comparing large language models (LLMs) on Russian-language tasks. It provides an overview of model performance and lets users browse existing results and submit new evaluations. The leaderboard is aimed at researchers, developers, and enthusiasts who want to understand the capabilities of Russian-language models.
What types of models are included in the Russian LLM Leaderboard?
The leaderboard includes a wide range of LLMs, from open-source models to commercial offerings, as long as they support the Russian language and have been benchmarked according to the platform's criteria.
How can I submit my own LLM for evaluation?
To submit your model, navigate to the submission section of the leaderboard and follow the provided guidelines. Ensure your model meets the specified requirements for benchmarking on Russian language tasks.
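A common prerequisite for open LLM leaderboards is that the model can be loaded directly from the Hugging Face Hub; the exact requirements for this leaderboard are listed on the platform itself. As a minimal sanity check before submitting, you can confirm the model loads and generates Russian text locally. The repository name below is a placeholder, not a real submission:

```python
# Sanity-check that a model loads and generates Russian text before submitting.
# The model ID below is a placeholder; substitute your own Hub repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "your-username/your-russian-llm"  # hypothetical repository name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

prompt = "Столица России — "
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If this step fails locally, a leaderboard evaluation run is likely to fail as well, so it is worth fixing loading issues before submission.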
What factors influence the benchmarking scores?
Scores are influenced by performance on tasks such as text generation, question-answering, translation, and other linguistic benchmarks. The specific datasets and evaluation metrics used are detailed on the platform.
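The leaderboard's actual aggregation scheme is documented on the platform. As a rough illustration only, an overall score is often computed as a mean of per-task metrics; the task names, scores, and equal weighting below are assumptions, not the platform's methodology:

```python
# Illustrative aggregation of per-task scores into one leaderboard number.
# Task names, score values, and the equal-weight average are assumptions,
# not the leaderboard's actual methodology.
per_task_scores = {
    "text_generation": 0.71,
    "question_answering": 0.64,
    "translation": 0.58,
}

overall = sum(per_task_scores.values()) / len(per_task_scores)
print(f"Overall score: {overall:.3f}")  # -> Overall score: 0.643
```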