Submit deepfake detection models for evaluation
The Deepfake Detection Arena Leaderboard is a platform designed for benchmarking and evaluating deepfake detection models. It allows researchers and developers to submit their models for evaluation against a variety of deepfake datasets and scenarios. The leaderboard provides a community-driven space for comparing model performance and fostering advancements in detecting synthetic media.
• Model Submission: Submit deepfake detection models for evaluation
• Standardized Metrics: Models are ranked by accuracy, precision, recall, and F1-score (see the sketch after this list)
• Benchmark Datasets: Access to diverse datasets to test model robustness
• Leaderboard Ranking: Transparent ranking system to compare model performance
• Continuous Feedback: Detailed performance reports for model improvement
• Community Engagement: Forum for discussions and knowledge sharing among participants
• Regular Updates: Periodic updates with new datasets and evaluation criteria
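The leaderboard names accuracy, precision, recall, and F1-score as its ranking metrics. As a minimal sketch of how these standard metrics are computed for a binary deepfake/real task, using scikit-learn (the labels and predictions below are made up for illustration; this is not the platform's actual evaluation code):

```python
# Illustrative only: computing the four ranking metrics for a
# binary classification task (1 = deepfake, 0 = real).
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical ground-truth labels and model predictions.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

print(f"Accuracy:  {accuracy_score(y_true, y_pred):.3f}")
print(f"Precision: {precision_score(y_true, y_pred):.3f}")
print(f"Recall:    {recall_score(y_true, y_pred):.3f}")
print(f"F1-score:  {f1_score(y_true, y_pred):.3f}")
```

Precision and recall capture complementary failure modes (false alarms on real media vs. missed deepfakes), which is why the leaderboard reports both alongside the F1-score that balances them.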
1. Prepare Your Model
2. Register on the Platform
3. Submit Your Model
4. Evaluate Against Benchmarks
5. View Results
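The platform's actual submission interface is not documented here. As a rough, assumption-laden illustration of what a REST-style submission script for steps 2–3 could look like (the endpoint URL, payload fields, and token scheme are all placeholders, not the platform's real API):

```python
# Hypothetical submission script. The URL, payload fields, and auth
# scheme are assumptions; consult the platform's submission guidelines
# for the real interface.
import requests

API_URL = "https://example.com/api/v1/submissions"  # placeholder endpoint
API_TOKEN = "YOUR_API_TOKEN"  # token assumed to be issued at registration

payload = {
    "model_name": "my-deepfake-detector",
    "framework": "pytorch",  # any ML framework is accepted
    "weights_url": "https://example.com/my-model/weights.bin",
}

resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print("Submission accepted:", resp.json())
```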
What types of deepfake detection models can I submit?
You can submit models built using any machine learning framework or architecture, as long as they adhere to the submission guidelines.
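The submission guidelines themselves are not reproduced here; as a purely illustrative sketch, a framework-agnostic model might be wrapped behind a single prediction method like the following (the class name and signature are assumptions, not a required interface):

```python
# Hypothetical framework-agnostic wrapper; names and shapes are
# illustrative assumptions, not the platform's required API.
import numpy as np

class DeepfakeDetector:
    """Wraps any underlying model behind a uniform predict interface."""

    def __init__(self, model):
        self.model = model  # e.g. a PyTorch, TensorFlow, or ONNX model

    def predict(self, frames: np.ndarray) -> np.ndarray:
        """Return per-frame probabilities that the input is a deepfake.

        frames: array of shape (batch, height, width, channels),
                pixel values in [0, 1].
        """
        # Delegate to the wrapped model; adapt pre/post-processing to
        # whatever the underlying framework expects.
        return np.asarray(self.model(frames)).reshape(-1)
```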
How are models evaluated on the leaderboard?
Models are evaluated using standardized metrics such as accuracy, precision, recall, and F1-score. These metrics are calculated based on performance against benchmark datasets.
Can I access the datasets used for evaluation?
Yes, the benchmark datasets are available for download through the platform. They are designed to represent diverse and challenging scenarios for deepfake detection.