Evaluate LLM over-refusal rates with OR-Bench
OR-Bench Leaderboard is a benchmarking tool designed to evaluate large language models (LLMs) with a specific focus on their over-refusal rates. It provides a platform for assessing how often LLMs refuse to answer prompts that are actually safe to answer, typically because those prompts merely look sensitive. This metric is crucial for understanding model reliability and effectiveness in real-world applications.
• Over-refusal rate tracking: Measures how frequently LLMs decline to answer questions they should be able to answer.
• Comparison across models: Allows users to compare multiple models based on their refusal rates (see the sketch after this list).
• Real-time leaderboards: Provides up-to-date rankings of LLMs in a competitive format.
• Interactive data exploration: Enables users to filter results by criteria such as model size or dataset.
• Transparency and reproducibility: Offers detailed methodologies and datasets for independent verification.
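To give a feel for the comparison and filtering workflow described above, here is a minimal Python sketch using pandas. The column names, model names, and figures are illustrative assumptions, not the actual OR-Bench leaderboard schema or results.

```python
import pandas as pd

# Hypothetical leaderboard export; columns and values are illustrative
# assumptions, not the actual OR-Bench schema or published scores.
leaderboard = pd.DataFrame(
    {
        "model": ["model-a", "model-b", "model-c"],
        "size_b_params": [7, 13, 70],
        "over_refusal_rate": [0.18, 0.09, 0.04],
    }
)

# Filter to models under 20B parameters and rank them;
# a lower over-refusal rate is better.
small_models = leaderboard[leaderboard["size_b_params"] < 20]
print(small_models.sort_values("over_refusal_rate"))
```

The same kind of filtering and sorting is what the interactive leaderboard exposes through its web interface, just without writing any code.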
1. Why is OR-Bench Leaderboard important for evaluating LLMs?
OR-Bench Leaderboard is important because it helps identify models that are overly cautious, ensuring they provide meaningful answers rather than refusing prompts they could safely address.
2. Can anyone submit their model to OR-Bench Leaderboard?
Yes, researchers and developers can submit their models for evaluation by following the submission guidelines provided on the platform.
3. How is the over-refusal rate calculated?
The over-refusal rate is the fraction of evaluated prompts that a model declines to answer even though the prompts are safe and the model is capable of responding.
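As a rough illustration of the metric, the sketch below scores a batch of responses to safe prompts with a simple keyword heuristic and reports the fraction that were refusals. The heuristic, the marker phrases, and the sample responses are assumptions made for demonstration only; the leaderboard's actual judging pipeline may work differently.

```python
# Minimal sketch of an over-refusal calculation, under the assumption that
# refusals can be spotted with a crude keyword check. This stands in for a
# proper refusal classifier and is not OR-Bench's actual judging method.

REFUSAL_MARKERS = (
    "i can't help with that",
    "i cannot assist",
    "i'm sorry, but",
)

def looks_like_refusal(response: str) -> bool:
    """Crude keyword check standing in for a real refusal classifier."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def over_refusal_rate(responses: list[str]) -> float:
    """Fraction of responses to *safe* prompts that the model refused."""
    if not responses:
        return 0.0
    refusals = sum(looks_like_refusal(r) for r in responses)
    return refusals / len(responses)

# Example: three responses to benign prompts, one of which is a refusal.
sample = [
    "Here is a summary of the article you asked about...",
    "I'm sorry, but I can't help with that request.",
    "Sure! The capital of France is Paris.",
]
print(f"Over-refusal rate: {over_refusal_rate(sample):.2%}")  # 33.33%
```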
4. Does OR-Bench Leaderboard provide insights into model reliability?
Yes, the leaderboard offers insights into model reliability by highlighting how often models refuse to answer questions they could safely address, helping users assess their practical effectiveness.
5. Are the datasets used for evaluation publicly accessible?
Yes, the datasets and evaluation methodologies used by OR-Bench Leaderboard are transparent and publicly accessible to ensure reproducibility and fairness.