SolidityBench Leaderboard
SolidityBench Leaderboard is a benchmarking tool in the Model Benchmarking category for ranking and comparing language models. It gives developers and researchers a platform to evaluate and submit models, and to measure their performance against industry standards and competing models.
• Support for multiple language models: Compare various models side-by-side.
• Customizable benchmarks: Define specific testing criteria and scenarios.
• Real-time updates: Stay informed with the latest model results (see the sketch after this list for one way to poll the leaderboard programmatically).
• Detailed result visualization: Access graphs, charts, and other visual representations of model performance.
• Submission portal: Easily submit your own model for benchmarking and inclusion in the leaderboard.
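For programmatic access to live results, an integration might look like the following minimal sketch. The endpoint URL and the response fields (`model`, `score`) are assumptions made for illustration; they are not part of any documented SolidityBench API.

```python
# Illustrative sketch only: the endpoint and response fields below are
# hypothetical, not a documented SolidityBench API.
import requests

LEADERBOARD_URL = "https://example.com/api/leaderboard"  # hypothetical endpoint

def print_top_models(limit: int = 10) -> None:
    """Fetch the current leaderboard and print the top-ranked models."""
    response = requests.get(LEADERBOARD_URL, timeout=10)
    response.raise_for_status()
    entries = response.json()  # assumed shape: list of {"model": ..., "score": ...}

    # Sort descending by score and show the top entries.
    ranked = sorted(entries, key=lambda e: e["score"], reverse=True)[:limit]
    for rank, entry in enumerate(ranked, start=1):
        print(f"{rank:>2}. {entry['model']}: {entry['score']:.2f}")

if __name__ == "__main__":
    print_top_models()
```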
What is the purpose of SolidityBench Leaderboard?
The purpose is to provide a standardized platform for comparing language models, helping researchers and developers identify top-performing models for specific tasks.
How do I submit my language model for benchmarking?
Submit your model through the platform's submission portal, ensuring it meets the specified requirements and guidelines.
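As a rough illustration of what a programmatic submission could look like, the sketch below POSTs model metadata to a hypothetical submission endpoint. The URL and payload fields are assumptions, not the platform's documented interface; in practice, follow the requirements listed in the submission portal itself.

```python
# Hypothetical example: the endpoint and payload fields are illustrative,
# not SolidityBench's documented submission interface.
import requests

SUBMISSION_URL = "https://example.com/api/submissions"  # hypothetical endpoint

payload = {
    "model_name": "my-org/my-language-model",  # hypothetical model identifier
    "revision": "main",
    "contact_email": "author@example.com",
}

response = requests.post(SUBMISSION_URL, json=payload, timeout=30)
response.raise_for_status()
print("Submission accepted:", response.json())
```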
Can I create custom benchmarks for my specific use case?
Yes, SolidityBench Leaderboard allows users to define custom benchmarks tailored to their needs, enabling more relevant performance evaluations.
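A custom benchmark is typically a small declarative definition of tasks and scoring. The sketch below shows one plausible shape for such a definition, with a Solidity-flavored example task given the focus implied by the platform's name; the schema (task IDs, weights, metric) is invented for illustration and does not reflect SolidityBench's actual configuration format.

```python
# Hypothetical schema: the fields below are invented for illustration and
# do not reflect SolidityBench's actual benchmark-definition format.
custom_benchmark = {
    "name": "erc20-generation",
    "description": "Generate ERC-20 token contracts from natural-language specs",
    "tasks": [
        {"id": "basic-transfer", "weight": 0.5},
        {"id": "allowance-logic", "weight": 0.5},
    ],
    "metric": "pass_rate",  # fraction of outputs that compile and pass tests
}

def weighted_score(task_results: dict[str, float], benchmark: dict) -> float:
    """Combine per-task pass rates into a single weighted benchmark score."""
    return sum(
        task["weight"] * task_results[task["id"]]
        for task in benchmark["tasks"]
    )

# Example: per-task pass rates of 0.9 and 0.7 yield a combined score of 0.8.
print(weighted_score({"basic-transfer": 0.9, "allowance-logic": 0.7}, custom_benchmark))
```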