Browse and evaluate language models
SolidityBench Leaderboard
Explain GPU usage for model training
Measure execution times of BERT models using WebGPU and WASM
Convert Stable Diffusion checkpoint to Diffusers and open a PR
Browse and submit model evaluations in LLM benchmarks
Submit deepfake detection models for evaluation
Load AI models and prepare your space
View and submit machine learning model evaluations
Convert and upload model files for Stable Diffusion
Merge LoRA adapters with a base model
Export Hugging Face models to ONNX
Explore GenAI model efficiency on ML.ENERGY leaderboard
The Hebrew LLM Leaderboard is a platform for benchmarking and evaluating language models tailored to the Hebrew language. It provides a centralized place to explore, compare, and analyze the performance of large language models (LLMs) on Hebrew datasets and tasks, making it useful for researchers, developers, and practitioners selecting models for Hebrew-based NLP applications.
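If the leaderboard publishes its scores as a Hugging Face dataset (a common pattern for leaderboard Spaces, though not confirmed here), the results can also be explored programmatically. The sketch below is purely illustrative: the repo id hebrew-llm-leaderboard/results and the columns model_name and average_score are assumptions, not confirmed artifacts of this leaderboard.

```python
# Hypothetical sketch: loading leaderboard-style results as a dataset.
# The repo id and column names are assumptions for illustration only.
from datasets import load_dataset

results = load_dataset("hebrew-llm-leaderboard/results", split="train")

# Rank entries by an assumed aggregate score column and print the top five.
ranked = sorted(results, key=lambda row: row["average_score"], reverse=True)
for row in ranked[:5]:
    print(f"{row['model_name']}: {row['average_score']:.2f}")
```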
What is the purpose of the Hebrew LLM Leaderboard?
The Hebrew LLM Leaderboard aims to simplify the process of identifying and evaluating language models for Hebrew-specific tasks, helping users make informed decisions.
How are the models evaluated?
Models are evaluated using standardized datasets and tasks specific to the Hebrew language, ensuring consistent and comparable results.
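As a concrete illustration, standardized runs of this kind are commonly performed with a harness such as EleutherAI's lm-evaluation-harness (pip install lm-eval). The sketch below assumes that harness; the task name hebrew_task is a placeholder, since the leaderboard's actual task identifiers are not specified here, and dicta-il/dictalm2.0 is just one example of a Hebrew model.

```python
# Sketch of a standardized evaluation run with lm-evaluation-harness.
# "hebrew_task" is a placeholder task name, not a real task identifier.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                   # evaluate a Hugging Face model
    model_args="pretrained=dicta-il/dictalm2.0",  # example Hebrew model; swap in your own
    tasks=["hebrew_task"],                        # placeholder: use the leaderboard's tasks
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])
```

Because every model is scored against the same fixed tasks and few-shot settings, the resulting numbers are directly comparable across submissions.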
Is the Hebrew LLM Leaderboard suitable for non-experts?
Yes, the platform is designed to be user-friendly, with clear visualizations and explanations, making it accessible to both experts and non-experts.