Explore GenAI model efficiency on ML.ENERGY leaderboard
The ML.ENERGY Leaderboard is a platform for benchmarking and comparing the energy consumption and performance of AI models. It provides a transparent, standardized way to evaluate model efficiency, so users can make informed decisions about which models to deploy. The leaderboard focuses specifically on GenAI energy efficiency, helping developers and organizations identify models that balance performance with energy usage.
What is the ML.ENERGY Leaderboard?
The ML.ENERGY Leaderboard is a tool for benchmarking AI models based on their energy consumption and performance, helping users find efficient solutions.
How are models evaluated on the leaderboard?
Models are evaluated based on their energy consumption during inference and training, as well as their performance metrics such as accuracy and speed.
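The FAQ does not spell out the exact scoring methodology, but the core idea of ranking models by energy consumption alongside a performance metric can be sketched as follows. All model names, numbers, and the joules-per-response metric below are hypothetical illustrations, not real leaderboard data or the leaderboard's actual formula.

```python
# Hypothetical sketch: rank models by measured energy per response,
# keeping a quality score alongside so efficiency can be weighed
# against performance. Numbers are made up for illustration.

from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    model: str
    energy_joules: float   # total GPU energy measured over the benchmark run
    responses: int         # number of responses generated during the run
    quality_score: float   # e.g., accuracy on an evaluation set, 0..1

    @property
    def joules_per_response(self) -> float:
        # Lower is better: average energy spent per generated response.
        return self.energy_joules / self.responses

def rank_by_efficiency(results: list[BenchmarkResult]) -> list[BenchmarkResult]:
    """Sort models from most to least energy-efficient."""
    return sorted(results, key=lambda r: r.joules_per_response)

results = [
    BenchmarkResult("model-a", energy_joules=1200.0, responses=500, quality_score=0.82),
    BenchmarkResult("model-b", energy_joules=800.0, responses=500, quality_score=0.79),
]

for r in rank_by_efficiency(results):
    print(f"{r.model}: {r.joules_per_response:.2f} J/response, quality {r.quality_score}")
```

In practice a leaderboard would also report latency and throughput, and leave the efficiency/quality trade-off to the reader rather than collapsing it into a single score.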
How often is the leaderboard updated?
The leaderboard is continuously updated with new models and data to reflect the latest advancements in AI research and development.