Browse and submit model evaluations in LLM benchmarks
Browse and submit evaluations for CaselawQA benchmarks
Display benchmark results
Pergel: A Unified Benchmark for Evaluating Turkish LLMs
Run benchmarks on prediction models
Compare code model performance on benchmarks
Benchmark models using PyTorch and OpenVINO
Merge machine learning models using a YAML configuration file
Convert a Stable Diffusion XL checkpoint to Diffusers and open a PR
Multilingual Text Embedding Model Pruner
View LLM Performance Leaderboard
Compare and rank LLMs using benchmark scores
Compare LLM performance across benchmarks
The OpenLLM Turkish leaderboard v0.2 is a benchmarking and evaluation platform tailored to Turkish language models. It lets users browse, compare, and submit evaluations of large language models (LLMs) on Turkish benchmarks, making it easier to analyze model performance across tasks and datasets and to identify the best-performing models for a given need.
• Model Benchmarking: Evaluate and compare the performance of different Turkish language models.
• Submission Interface: Easily submit your own model evaluations for inclusion in the leaderboard.
• Filtering and Sorting: Filter and sort models by performance metric, dataset, or task type (see the sketch after this list).
• Detailed Model Comparisons: View side-by-side comparisons of model performance across multiple benchmarks.
• Visualizations: Access charts and graphs to understand performance trends and differences.
• Documentation: Get access to resources and guides for using the leaderboard effectively.
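To give a rough idea of the kind of filtering and sorting the leaderboard offers, here is a minimal sketch in Python with pandas. The file name (results.csv) and the column names (model, task, dataset, accuracy) are assumptions for illustration only; the actual leaderboard export and schema may differ.

```python
import pandas as pd

# Hypothetical export of the leaderboard table; column names are assumed
# for illustration and may not match the leaderboard's real schema.
df = pd.read_csv("results.csv")  # columns: model, task, dataset, accuracy

# Keep only results for a given task and dataset.
qa_results = df[(df["task"] == "question-answering") & (df["dataset"] == "turkish_qa")]

# Rank models by accuracy, best first, and show the top 10.
ranking = qa_results.sort_values("accuracy", ascending=False)
print(ranking[["model", "accuracy"]].head(10))
```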
1. What types of models are included in the leaderboard?
The leaderboard covers a variety of Turkish language models, from small-scale models to state-of-the-art systems, evaluated across diverse tasks and datasets.
2. How are models evaluated on the leaderboard?
Models are evaluated on standard benchmarks and metrics relevant to Turkish language tasks, such as perplexity, BLEU score, or accuracy on specific datasets (a small metric example follows the FAQ).
3. Can I submit my own model for evaluation?
Yes, the leaderboard provides a submission interface where you can upload your model's evaluation results after preparing them according to the platform's guidelines (a hypothetical preparation sketch also follows the FAQ).
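As a concrete illustration of the accuracy and BLEU metrics mentioned in the FAQ, the sketch below computes both for a toy set of predictions. It uses the sacrebleu package for BLEU; the example sentences are invented for illustration and are not taken from the leaderboard's datasets.

```python
import sacrebleu

# Toy example data (invented for illustration).
references = ["Bugün hava çok güzel.", "Model Türkçe metin üretiyor."]
predictions = ["Bugün hava güzel.", "Model Türkçe metin üretiyor."]

# Exact-match accuracy over the toy prediction/reference pairs.
accuracy = sum(p == r for p, r in zip(predictions, references)) / len(references)

# Corpus-level BLEU with sacrebleu (one reference stream covering all predictions).
bleu = sacrebleu.corpus_bleu(predictions, [references])

print(f"accuracy = {accuracy:.2f}, BLEU = {bleu.score:.1f}")
```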
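And as a rough idea of what prepared evaluation results might look like, here is a hypothetical sketch that writes scores to a JSON file. The field names, task names, and file layout are assumptions for illustration only; always follow the submission guidelines published on the leaderboard itself.

```python
import json

# Hypothetical results payload; field and task names are illustrative only
# and do not reflect the leaderboard's actual submission format.
submission = {
    "model": "my-org/turkish-llm-7b",
    "revision": "main",
    "precision": "bfloat16",
    "results": {
        "turkish_mmlu": {"accuracy": 0.61},
        "turkish_truthfulqa": {"accuracy": 0.48},
    },
}

with open("submission.json", "w", encoding="utf-8") as f:
    json.dump(submission, f, ensure_ascii=False, indent=2)
```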