Evaluate and submit AI model results for Frugal AI Challenge
Submission Portal is a unified platform designed to evaluate and submit AI model results for the Frugal AI Challenge. It provides a seamless interface for researchers and developers to showcase their models' performance, ensuring standardization and comparability across submissions.
• Result Submission: Easily upload your model's results for evaluation.
• Benchmarking: Compare your model's performance against industry standards and other submissions.
• Result Visualization: Access detailed metrics and graphs to analyze your model's strengths and weaknesses.
• Data Security: Your submissions are protected with robust security measures.
• User-Friendly Interface: Intuitive design ensures a smooth experience for all users.
What file formats are supported for submission?
The portal supports CSV, JSON, and ZIP formats. Ensure your files are properly formatted before submission.
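Since submissions are final once uploaded, it can help to sanity-check a results file locally first. Below is a minimal sketch of such a pre-upload check using only the Python standard library; the function name `check_submission` and the extension-based dispatch are illustrative assumptions, not part of the portal's actual validation logic.

```python
import csv
import io
import json
import zipfile
from pathlib import Path


def check_submission(path: str) -> bool:
    """Rough pre-upload check: does the file parse as CSV, JSON, or ZIP?

    Dispatches on the file extension (an assumption; the portal may
    inspect content differently). Returns True if the file parses.
    """
    p = Path(path)
    suffix = p.suffix.lower()
    try:
        if suffix == ".json":
            # Must be valid JSON end to end.
            json.loads(p.read_text(encoding="utf-8"))
        elif suffix == ".csv":
            # Must parse and contain at least one row.
            rows = list(csv.reader(io.StringIO(p.read_text(encoding="utf-8"))))
            if not rows:
                return False
        elif suffix == ".zip":
            # testzip() returns the first bad member name, or None if OK.
            with zipfile.ZipFile(p) as zf:
                if zf.testzip() is not None:
                    return False
        else:
            return False  # unsupported extension
    except (json.JSONDecodeError, zipfile.BadZipFile, csv.Error, OSError, UnicodeDecodeError):
        return False
    return True
```

This only catches syntactic problems (truncated JSON, corrupt archives); it cannot verify that the columns or fields match what the challenge expects, so still review the submission guidelines before uploading.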
How long does the evaluation process take?
Evaluations are typically completed within 24-48 hours, depending on the complexity of your submission.
Can I edit my submission after uploading?
No, submissions are final once uploaded. Ensure all details are accurate before submitting.