View and submit language model evaluations
ContextualBench-Leaderboard is a platform for benchmarking and evaluating language models. It provides a centralized space to view and compare model performance across tasks and datasets, and lets users submit their own evaluations and track progress in natural language processing.
• Comprehensive Leaderboard: Displays performance metrics of language models in a sorted and searchable format.
• Submission Portal: Allows researchers to upload their model evaluations for inclusion in the leaderboard.
• Comparison Tools: Enables side-by-side comparison of models based on specific benchmarks or datasets.
• Filtering Options: Users can filter results by model type, dataset, or performance metric.
• Real-Time Updates: The leaderboard is updated regularly to reflect the latest submissions and advancements.
• Documentation and Guides: Provides resources for understanding evaluation metrics and submission processes.
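The filtering and comparison features above boil down to selecting rows by dataset and ranking them by a metric. A minimal sketch of that logic, using hypothetical entries and field names (not the leaderboard's actual schema):

```python
# Hypothetical leaderboard rows; model names, datasets, and field names
# are illustrative only, not ContextualBench's real schema.
leaderboard = [
    {"model": "model-a", "dataset": "hotpotqa", "accuracy": 0.62},
    {"model": "model-b", "dataset": "hotpotqa", "accuracy": 0.71},
    {"model": "model-c", "dataset": "nq", "accuracy": 0.55},
]

def filter_and_rank(entries, dataset, metric="accuracy"):
    """Keep rows for one dataset and sort them best-first by the metric."""
    rows = [e for e in entries if e["dataset"] == dataset]
    return sorted(rows, key=lambda e: e[metric], reverse=True)

ranked = filter_and_rank(leaderboard, "hotpotqa")
print([e["model"] for e in ranked])  # → ['model-b', 'model-a']
```

The same pattern extends to side-by-side comparison: filter once per dataset, then align the resulting rankings by model name.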
What models are included in ContextualBench-Leaderboard?
The leaderboard covers a wide range of language models, from large state-of-the-art systems to smaller, specialized ones. The list is updated regularly as new evaluations are accepted.
How do I submit my model for evaluation?
To submit your model, navigate to the submission portal on the ContextualBench-Leaderboard website and follow the detailed guidelines provided. Ensure your submission includes all required metrics and information.
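A submission typically bundles model metadata with per-dataset results. The sketch below assembles such a payload as JSON; every field name here is an assumption for illustration, so always follow the required schema in the portal's guidelines:

```python
import json

# Hypothetical submission payload -- field names are assumptions,
# not the leaderboard's documented schema.
submission = {
    "model_name": "my-org/my-model",   # placeholder model identifier
    "revision": "main",
    "precision": "float16",
    "results": {
        "hotpotqa": {"accuracy": 0.68},  # one entry per evaluated dataset
    },
}

payload = json.dumps(submission, indent=2)
print(payload)
```

Serializing to JSON up front makes it easy to validate that all required metrics are present before uploading through the portal.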
Why should I use ContextualBench-Leaderboard?
ContextualBench-Leaderboard offers a user-friendly interface and comprehensive tools for comparing and analyzing language models. It is an excellent resource for researchers and developers looking to benchmark their models or stay informed about the latest advancements in the field.