Explore and benchmark visual document retrieval models
Convert Hugging Face model repo to Safetensors
Convert a Stable Diffusion XL checkpoint to Diffusers and open a PR
Text-to-Speech (TTS) evaluation using objective metrics
View RL Benchmark Reports
Measure BERT model performance using WASM and WebGPU
Calculate GPU requirements for running LLMs
View NSQL Scores for Models
Multilingual Text Embedding Model Pruner
Display and submit LLM benchmarks
Calculate memory needed to train AI models
Evaluate open LLMs in the languages of LATAM and Spain
Browse and submit evaluations for CaselawQA benchmarks
Vidore Leaderboard is a tool for exploring and benchmarking visual document retrieval models. It provides a single place to compare how different models perform on visual document retrieval tasks and to understand their strengths and weaknesses.
• Comprehensive Model Database: Access a wide range of pre-trained models for visual document retrieval.
• Customizable Benchmarking: Define custom benchmarks to evaluate models based on specific criteria (a minimal sketch of the idea follows this list).
• Performance Metrics: Detailed metrics to assess model accuracy, efficiency, and robustness.
• Visual Results: Interactive visualizations to compare model performance side-by-side.
• Community Sharing: Share benchmark results and insights with the broader AI research community.
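As a rough illustration of what a custom benchmark and its score might look like, here is a minimal Python sketch. The `benchmark` schema and the `retrieve()` stub are hypothetical, not the leaderboard's actual data format or API; the point is only to show gold relevant-page labels being compared against a model's ranked results.

```python
from statistics import mean

# Hypothetical custom benchmark: each entry pairs a query with the ids of
# the document pages that actually answer it. The schema and the retrieve()
# stub below are illustrative assumptions, not the leaderboard's real format.
benchmark = [
    {"query": "total revenue in Q3", "relevant": {"report_p3"}},
    {"query": "invoice due date", "relevant": {"invoice_p1"}},
]

def retrieve(query: str, k: int = 5) -> list[str]:
    """Stand-in for the model under test: returns k page ids, best first."""
    return ["report_p3", "slide_12", "invoice_p1", "report_p1", "slide_02"][:k]

def recall_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    """Fraction of the relevant pages that appear in the top-k results."""
    hits = sum(1 for page in retrieved[:k] if page in relevant)
    return hits / len(relevant)

score = mean(
    recall_at_k(retrieve(item["query"]), item["relevant"], k=5)
    for item in benchmark
)
print(f"Recall@5 = {score:.2f}")  # 1.00 for this toy retriever and benchmark
```

Averaging a simple per-query score like this over all benchmark queries is the basic pattern behind most leaderboard numbers, whatever the specific metric.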
What is visual document retrieval?
Visual document retrieval refers to systems that retrieve documents using their visual content, such as page images and layout, rather than relying on text-based search alone.
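In practice this is often implemented as a dual encoder: page images and text queries are embedded into a shared vector space, and pages are ranked by similarity to the query. The sketch below only illustrates that idea; `embed_page` and `embed_query` are placeholder functions returning random unit vectors, where a real system would use a vision-language encoder.

```python
import numpy as np

# Minimal sketch of a dual-encoder visual retriever. The embed_* functions
# are placeholders (random vectors); a real system would map page images and
# text queries into the same vector space with a vision-language model.
rng = np.random.default_rng(0)
DIM = 128

def embed_page(page_image: str) -> np.ndarray:
    """Placeholder page-image encoder: returns a random unit vector."""
    v = rng.normal(size=DIM)
    return v / np.linalg.norm(v)

def embed_query(query: str) -> np.ndarray:
    """Placeholder text-query encoder: returns a random unit vector."""
    v = rng.normal(size=DIM)
    return v / np.linalg.norm(v)

# Index a small "corpus" of page images (file names stand in for the images).
pages = ["invoice_p1.png", "report_p3.png", "slide_12.png"]
page_vecs = np.stack([embed_page(p) for p in pages])

# Retrieve: rank pages by cosine similarity to the query embedding.
q = embed_query("What was the total revenue in Q3?")
scores = page_vecs @ q             # cosine similarity (vectors are unit-norm)
ranking = np.argsort(-scores)      # best match first
for rank, idx in enumerate(ranking, start=1):
    print(f"{rank}. {pages[idx]}  score={scores[idx]:.3f}")
```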
How do I interpret the performance metrics?
Performance metrics are provided in an easy-to-understand format, with visual charts and numerical scores to help compare model effectiveness.
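One metric commonly reported for retrieval leaderboards is nDCG@5, which rewards placing relevant pages near the top of the ranking: 1.0 means a perfect ranking, 0.0 means no relevant page appeared in the top five. The snippet below is a small, self-contained worked example of how such a score is computed; the leaderboard's exact metric set may differ.

```python
import math

def dcg(relevances: list[int]) -> float:
    """Discounted cumulative gain: later ranks contribute less."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg_at_k(ranked_relevances: list[int], k: int = 5) -> float:
    """DCG of the actual ranking divided by the DCG of an ideal ranking."""
    ideal_dcg = dcg(sorted(ranked_relevances, reverse=True)[:k])
    return dcg(ranked_relevances[:k]) / ideal_dcg if ideal_dcg > 0 else 0.0

# Binary relevance of the top-5 results returned by some model:
# the 2nd and 3rd pages were relevant, the rest were not.
print(ndcg_at_k([0, 1, 1, 0, 0]))  # ≈ 0.69 → decent, but not a perfect ranking
```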
Can I use Vidore Leaderboard for non-public models?
Yes, Vidore Leaderboard supports benchmarking private models by uploading them through the platform or API.