The Hebrew LLM Leaderboard is a comprehensive platform designed for benchmarking and evaluating language models specifically tailored for the Hebrew language. It provides a centralized repository where users can explore, compare, and analyze the performance of various large language models (LLMs) on Hebrew datasets and tasks. This tool is invaluable for researchers, developers, and professionals looking to identify the most suitable models for their Hebrew-based NLP applications.
What is the purpose of the Hebrew LLM Leaderboard?
The Hebrew LLM Leaderboard aims to simplify the process of identifying and evaluating language models for Hebrew-specific tasks, helping users make informed decisions.
How are the models evaluated?
Models are evaluated using standardized datasets and tasks specific to the Hebrew language, ensuring consistent and comparable results.
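To illustrate how a leaderboard can turn per-task evaluation results into a single ranking, here is a minimal sketch. The model names, task names, and scores below are purely hypothetical; the actual leaderboard uses its own Hebrew datasets and scoring pipeline.

```python
# Minimal sketch of leaderboard-style aggregation: rank models by their
# mean score across tasks. All names and numbers are hypothetical.

from statistics import mean

# Hypothetical per-task accuracy scores (0-1) on illustrative Hebrew NLP tasks.
scores = {
    "model-a": {"sentiment": 0.81, "qa": 0.74, "translation": 0.69},
    "model-b": {"sentiment": 0.77, "qa": 0.82, "translation": 0.73},
    "model-c": {"sentiment": 0.65, "qa": 0.70, "translation": 0.60},
}

# Rank models by mean score across all tasks, highest first.
leaderboard = sorted(
    ((name, mean(task_scores.values())) for name, task_scores in scores.items()),
    key=lambda item: item[1],
    reverse=True,
)

for rank, (name, avg) in enumerate(leaderboard, start=1):
    print(f"{rank}. {name}: {avg:.3f}")
```

Because every model is scored on the same fixed set of tasks, the resulting averages are directly comparable, which is what makes the ranking meaningful.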
Is the Hebrew LLM Leaderboard suitable for non-experts?
Yes, the platform is designed to be user-friendly, with clear visualizations and explanations, making it accessible to both experts and non-experts.