Explore and filter model evaluation results
GTBench is a data visualization tool designed to help users explore and filter model evaluation results. It provides an interactive interface to analyze and compare performance metrics of different models, enabling deeper insights into their effectiveness.
• Interactive Visualization: Explore model performance through dynamic, customizable visualizations.
• Advanced Filtering: Narrow results by criteria such as model type, dataset, or performance metric.
• Real-Time Updates: Get instant feedback as you adjust filters or visualization settings.
• Multi-Model Support: Compare results from multiple models in a single interface.
• Customizable Dashboards: Tailor the layout to the metrics that matter most.
• Export Capabilities: Save and share visualizations or raw data for further analysis.
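The filter-and-compare workflow above can also be reproduced outside the UI on exported data. A minimal sketch in pandas; the column names and values here are hypothetical placeholders, not GTBench's actual schema:

```python
import pandas as pd

# Hypothetical evaluation results; GTBench's real export schema may differ.
results = pd.DataFrame({
    "model": ["model-a", "model-b", "model-a", "model-b"],
    "dataset": ["arc", "arc", "gsm8k", "gsm8k"],
    "accuracy": [0.91, 0.84, 0.88, 0.79],
})

# Filter to one dataset (the "Advanced Filtering" step) ...
arc = results[results["dataset"] == "arc"]

# ... then compare models side by side (the "Multi-Model Support" step).
comparison = arc.pivot_table(index="model", values="accuracy")
print(comparison)
```

The same pattern extends to any filter criterion (model type, metric threshold, and so on) by changing the boolean mask.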
What does GTBench stand for?
GTBench stands for Graph Tool Benchmark, a utility for analyzing and visualizing model evaluation data.
Can I use GTBench for models other than graphs?
Yes, GTBench supports a variety of model types, including but not limited to graph-based models.
How do I export visualization results from GTBench?
To export results, use the "Export" button in the toolbar, which allows you to save visualizations as images or raw data as CSV files.
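An exported CSV can then be analyzed with standard tooling. A minimal sketch, assuming a two-column export; the column names below are illustrative and may not match GTBench's actual output:

```python
import io
import pandas as pd

# Stand-in for the contents of an exported GTBench CSV file;
# the real header row may use different column names.
csv_text = """model,accuracy
model-a,0.91
model-b,0.84
"""

# In practice you would call pd.read_csv("exported_results.csv") instead.
df = pd.read_csv(io.StringIO(csv_text))

# Identify the best-performing model in the export.
best = df.loc[df["accuracy"].idxmax(), "model"]
print(best)
```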