Display model benchmark results
Measure over-refusal in LLMs using OR-Bench
Merge machine learning models using a YAML configuration file
Convert Stable Diffusion checkpoint to Diffusers and open a PR
Push an ML model to the Hugging Face Hub
View LLM Performance Leaderboard
Create and upload a Hugging Face model card
Submit deepfake detection models for evaluation
Browse and submit evaluations for CaselawQA benchmarks
Calculate VRAM requirements for LLMs
Submit models for evaluation and view leaderboard
Track, rank and evaluate open LLMs and chatbots
Evaluate LLM over-refusal rates with OR-Bench
The Redteaming Resistance Leaderboard is a model benchmarking tool that evaluates and compares how well AI models resist adversarial attacks. It provides a platform for displaying and analyzing benchmark results, helping researchers and developers identify robust models that can withstand a range of adversarial scenarios.
• Leaderboard Display: Presents model benchmark results in a clear and structured format.
• Filtering Options: Allows users to narrow down results based on specific criteria.
• Detailed Metrics: Offers in-depth insights into model performance across different attack vectors.
• Visualization Tools: Includes charts and graphs to help users better understand the data.
• Export Data: Provides functionality to download results for further analysis (see the sketch after this list).
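To make the filtering and export features above concrete, here is a minimal sketch of how a downloaded slice of leaderboard results could be narrowed down, ranked, and saved offline. The column names, model names, scores, and output file name are illustrative assumptions, not the leaderboard's actual schema.

```python
# Hypothetical sketch: filter and export leaderboard-style results offline.
# Column names, model names, and scores are invented for illustration;
# they do not reflect the leaderboard's actual schema.
import pandas as pd

# Stand-in for a downloaded slice of benchmark results
results = pd.DataFrame(
    {
        "model": ["model-a", "model-b", "model-c"],
        "attack_vector": ["prompt_injection", "jailbreak", "prompt_injection"],
        "resistance_score": [0.91, 0.78, 0.84],
    }
)

# Narrow results to a single attack vector, mirroring the filtering options
prompt_injection = results[results["attack_vector"] == "prompt_injection"]

# Rank the remaining models by resistance score, highest first
ranked = prompt_injection.sort_values("resistance_score", ascending=False)

# Save the filtered view for further analysis, mirroring the export feature
ranked.to_csv("redteaming_resistance_filtered.csv", index=False)
print(ranked)
```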
What is the purpose of the Redteaming Resistance Leaderboard?
The leaderboard is designed to benchmark AI models based on their resistance to adversarial attacks, providing a clear comparison of their robustness and performance.
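As a rough illustration of such a comparison (and not the leaderboard's actual scoring method, which is not described here), a model's resistance can be summarized as the fraction of adversarial prompts it withstands; the sketch below assumes a hypothetical list of per-prompt outcomes.

```python
# Hypothetical sketch of a per-model resistance rate: the share of
# adversarial prompts the model withstood. The outcomes below are
# invented for illustration and are not real benchmark data.
def resistance_rate(outcomes: list[bool]) -> float:
    """outcomes[i] is True if the model resisted adversarial prompt i."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

# Example: a model that withstood 8 of 10 adversarial prompts scores 0.8
print(resistance_rate([True] * 8 + [False] * 2))
```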
How often are the results updated?
Results are updated regularly as new models and datasets are added to the benchmarking platform.
Can I use the leaderboard for commercial purposes?
Yes, the leaderboard is available for public use, including commercial applications, provided that proper attribution is given.