Browse and evaluate language models
View and submit language model evaluations
Evaluate Text-To-Speech (TTS) systems using objective metrics
Generate leaderboard comparing DNA models
Merge LoRA adapters with a base model (see the merge sketch after this list)
Pergel: A Unified Benchmark for Evaluating Turkish LLMs
Submit models for evaluation and view leaderboard
Display leaderboard of language model evaluations
Display genomic embedding leaderboard
Teach, test, and evaluate language models with MTEB Arena
Evaluate and submit AI model results for Frugal AI Challenge
Compare LLM performance across benchmarks
Optimize and train foundation models using IBM's FMS
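To make the LoRA entry above concrete, here is a minimal sketch of merging an adapter into its base model with Hugging Face's peft library. Both model identifiers are hypothetical placeholders, not names taken from this page.

    # Minimal sketch: merge a LoRA adapter into its base model using peft.
    # Both identifiers below are hypothetical placeholders.
    from transformers import AutoModelForCausalLM
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained("my-org/base-model")
    model = PeftModel.from_pretrained(base, "my-org/my-lora-adapter")
    merged = model.merge_and_unload()  # fold adapter weights into the base layers
    merged.save_pretrained("merged-model")

After merging, the saved model can be loaded and served like any ordinary checkpoint, with no peft dependency at inference time.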
The Hebrew LLM Leaderboard is a platform for benchmarking and evaluating language models on Hebrew-language tasks. It provides a centralized view where users can explore, compare, and analyze the performance of large language models (LLMs) on Hebrew datasets, helping researchers, developers, and practitioners identify the models best suited to their Hebrew NLP applications.
What is the purpose of the Hebrew LLM Leaderboard?
The Hebrew LLM Leaderboard aims to simplify the process of identifying and evaluating language models for Hebrew-specific tasks, helping users make informed decisions.
How are the models evaluated?
Models are evaluated using standardized datasets and tasks specific to the Hebrew language, ensuring consistent and comparable results.
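As an illustration of what standardized evaluation of this kind typically looks like, here is a minimal sketch using the open-source lm-evaluation-harness. The leaderboard's actual harness, datasets, and task names are not documented on this page, so the model identifier and task name below are placeholders, not the leaderboard's real configuration.

    # Minimal sketch: run a standardized benchmark with lm-evaluation-harness
    # (pip install lm-eval). The model id and task name are placeholders;
    # the Hebrew LLM Leaderboard's actual tasks and datasets may differ.
    import lm_eval

    results = lm_eval.simple_evaluate(
        model="hf",                                   # Hugging Face model backend
        model_args="pretrained=my-org/hebrew-model",  # hypothetical model id
        tasks=["hellaswag"],                          # placeholder task name
        num_fewshot=0,
    )
    print(results["results"])

Because every submission runs through the same harness, tasks, and few-shot settings, the resulting scores are directly comparable across models.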
Is the Hebrew LLM Leaderboard suitable for non-experts?
Yes, the platform is designed to be user-friendly, with clear visualizations and explanations, making it accessible to both experts and non-experts.