Submit deepfake detection models for evaluation
The Deepfake Detection Arena Leaderboard is a platform designed for benchmarking and evaluating deepfake detection models. It allows researchers and developers to submit their models for evaluation against a variety of deepfake datasets and scenarios. The leaderboard provides a community-driven space for comparing model performance and fostering advancements in detecting synthetic media.
• Model Submission: Submit deepfake detection models for evaluation
• Standardized Metrics: Models are ranked by accuracy, precision, recall, and F1-score
• Benchmark Datasets: Access to diverse datasets to test model robustness
• Leaderboard Ranking: Transparent ranking system to compare model performance
• Continuous Feedback: Detailed performance reports for model improvement
• Community Engagement: Forum for discussions and knowledge sharing among participants
• Regular Updates: Periodic updates with new datasets and evaluation criteria
1. Prepare Your Model (see the sketch after these steps)
2. Register on the Platform
3. Submit Your Model
4. Evaluate Against Benchmarks
5. View Results
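As a concrete illustration of the "Prepare Your Model" step, the sketch below packages a PyTorch detector behind a single predict() call. The DeepfakeDetector class, the predict() signature, and the TorchScript packaging are illustrative assumptions; the actual required format is whatever the platform's submission guidelines specify.

```python
# A minimal preparation sketch, assuming a PyTorch model and a
# probability-per-image interface. The class name, predict() signature,
# and TorchScript packaging are hypothetical -- follow the platform's
# submission guidelines for the real format.
import torch
import torch.nn as nn


def export_detector(model: nn.Module, path: str = "detector.pt") -> None:
    """Serialize a trained detector with TorchScript so it can be
    loaded without the original training code."""
    model.eval()
    torch.jit.script(model).save(path)


class DeepfakeDetector:
    """Uniform inference wrapper around an exported model."""

    def __init__(self, weights_path: str = "detector.pt"):
        self.model = torch.jit.load(weights_path)
        self.model.eval()

    @torch.no_grad()
    def predict(self, images: torch.Tensor) -> torch.Tensor:
        # images: (N, 3, H, W) floats in [0, 1];
        # returns per-image probability that the input is fake.
        logits = self.model(images)
        return torch.sigmoid(logits).squeeze(-1)
```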
What types of deepfake detection models can I submit?
You can submit models built using any machine learning framework or architecture, as long as they adhere to the submission guidelines.
How are models evaluated on the leaderboard?
Models are evaluated using standardized metrics such as accuracy, precision, recall, and F1-score. These metrics are calculated based on performance against benchmark datasets.
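As a local sanity check, the ranking metrics named above can be reproduced with scikit-learn. This is a sketch only: the toy labels and the 0.5 decision threshold are assumptions, not the leaderboard's actual evaluation pipeline.

```python
# Local sanity check of leaderboard-style metrics with scikit-learn.
# Labels: 1 = fake, 0 = real; predictions are thresholded model scores.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                   # ground-truth labels (toy data)
y_prob = [0.9, 0.2, 0.6, 0.8, 0.4, 0.1, 0.3, 0.7]   # model scores (toy data)
y_pred = [int(p >= 0.5) for p in y_prob]            # 0.5 threshold is an assumption

print(f"accuracy:  {accuracy_score(y_true, y_pred):.3f}")
print(f"precision: {precision_score(y_true, y_pred):.3f}")
print(f"recall:    {recall_score(y_true, y_pred):.3f}")
print(f"f1-score:  {f1_score(y_true, y_pred):.3f}")
```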
Can I access the datasets used for evaluation?
Yes, the benchmark datasets are available for download through the platform. They are designed to represent diverse and challenging scenarios for deepfake detection.
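Assuming the benchmark datasets are published on the Hugging Face Hub (the repository id below is a placeholder, not a real dataset name), downloading one could look like:

```python
# Hypothetical download sketch -- "arena/deepfake-benchmark" is a
# placeholder id; substitute the dataset actually listed on the platform.
from datasets import load_dataset

ds = load_dataset("arena/deepfake-benchmark", split="test")
print(ds)        # inspect features and example count
sample = ds[0]   # e.g. an image plus a real/fake label
```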