Leaderboard for text-to-video generation models
VideoScore Leaderboard is a data visualization tool for evaluating and comparing text-to-video generation models. It provides a centralized platform that displays leaderboard tables of video scores and evaluation data, helping users track performance, identify top-performing models, and understand each model's strengths and weaknesses.
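To picture what such a leaderboard table looks like when rendered as an app, here is a minimal sketch using pandas and Gradio. The file name `videoscore_leaderboard.csv` and the column names are illustrative assumptions, not part of the actual tool.

```python
import gradio as gr
import pandas as pd

# Hypothetical results file; the file name and columns are illustrative only.
df = pd.read_csv("videoscore_leaderboard.csv")  # e.g. columns: model, overall_score, ...

with gr.Blocks() as demo:
    gr.Markdown("# VideoScore Leaderboard")
    # Show models ranked by their overall score, highest first.
    gr.Dataframe(value=df.sort_values("overall_score", ascending=False))

demo.launch()
```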
• Interactive Tables: Sort and filter data to focus on specific models or metrics.
• Customizable Filters: Narrow down results by evaluation criteria, model names, or date ranges.
• Real-Time Updates: Stay current with the latest model evaluations and scores.
• Visual Analytics: Gain deeper insights with charts and graphs that highlight performance trends.
• Model Comparison: Directly compare multiple models side-by-side for comprehensive analysis.
• Export Options: Download data in various formats for offline review or reporting (a minimal sketch of filtering, sorting, and export follows this list).
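The filtering, sorting, and export features can be pictured with a short pandas sketch. The metric names, model names, and threshold below are hypothetical, chosen only to show the kind of operations the leaderboard exposes.

```python
import pandas as pd

# Hypothetical leaderboard rows; model names and metric columns are assumptions.
df = pd.DataFrame({
    "model": ["model_a", "model_b", "model_c"],
    "visual_quality": [3.1, 2.7, 3.4],
    "temporal_consistency": [2.9, 3.2, 3.0],
})

# Keep models above a quality threshold, then rank them by another metric.
filtered = df[df["visual_quality"] >= 3.0].sort_values(
    "temporal_consistency", ascending=False
)

# Export the filtered view for offline review or reporting.
filtered.to_csv("leaderboard_filtered.csv", index=False)
print(filtered)
```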
What models are supported by VideoScore Leaderboard?
VideoScore Leaderboard supports a wide range of text-to-video generation models, including popular ones like Pika Labs, Kaiber, and Pix2Vid.
Can I customize the metrics displayed in the leaderboard?
Yes, you can customize the metrics displayed by using the filtering options available in the tool. This allows you to focus on the specific evaluation criteria you care about.
How frequently is the leaderboard updated?
The leaderboard is updated in real-time as new evaluation data becomes available, ensuring you always have the most current information on model performance.