Submit evaluations for speaker tagging and view the leaderboard
The Post-ASR LLM-based Speaker Tagging Leaderboard is a data visualization tool for evaluating and comparing the performance of speaker tagging models. It focuses on post-automatic speech recognition (post-ASR) scenarios, in which large language models (LLMs) identify and tag speakers in transcribed text. The leaderboard gives researchers and developers a platform to submit evaluations, track performance metrics, and compare results against other state-of-the-art models.
• Model Evaluation Submission: Allows users to submit their speaker tagging model evaluations for benchmarking.
• Performance Tracking: Displays detailed performance metrics such as accuracy, precision, recall, and F1-score (a worked scoring sketch follows this list).
• Leaderboard Visualization: Presents results in a clear, sortable leaderboard format for easy comparison.
• Support for LLMs: Compatible with various large language models to enhance speaker tagging accuracy.
• Real-Time Updates: Provides up-to-date rankings and performance data as new submissions are added.
• Customizable Filters: Enables filtering of results based on specific models, datasets, or evaluation criteria.
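To make the listed metrics concrete, here is a minimal sketch of how a predicted speaker-label sequence could be scored against a reference one. The leaderboard's actual evaluation code is not shown on this page, so this is a stand-alone illustration using scikit-learn; the per-word tagging setup, the label names, and the macro averaging are all assumptions.

```python
# Minimal sketch: scoring predicted speaker tags against a reference using
# the metrics named above (accuracy, precision, recall, F1). The per-word
# framing, label values, and macro averaging are illustrative assumptions,
# not the leaderboard's actual evaluation code.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Hypothetical per-word speaker tags for one transcript segment.
reference = ["spk1", "spk1", "spk2", "spk2", "spk2", "spk1"]
predicted = ["spk1", "spk2", "spk2", "spk2", "spk2", "spk1"]

accuracy = accuracy_score(reference, predicted)
precision, recall, f1, _ = precision_recall_fscore_support(
    reference, predicted, average="macro", zero_division=0
)

print(f"accuracy:  {accuracy:.3f}")
print(f"precision: {precision:.3f}")
print(f"recall:    {recall:.3f}")
print(f"f1:        {f1:.3f}")
```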
What metrics are used to evaluate speaker tagging models on this leaderboard?
The leaderboard uses standard metrics such as accuracy, precision, recall, and F1-score to evaluate speaker tagging performance.
Can I use any LLM for speaker tagging on this platform?
Yes, the platform supports evaluations using any large language model (LLM) as long as the results are formatted according to the submission guidelines.
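The actual schema is defined by the platform's submission guidelines, which this page does not reproduce. Purely as a hypothetical illustration of what a formatted result file might look like, every field name below is assumed:

```python
# Hypothetical submission payload -- the real schema is defined by the
# platform's submission guidelines; every field name here is assumed.
import json

submission = {
    "model_name": "my-llm-speaker-tagger",  # assumed field: which LLM was used
    "dataset": "example-eval-set",          # assumed field: evaluation dataset
    "results": [
        {"segment_id": "seg-0001", "predicted_speakers": ["spk1", "spk2"]},
    ],
}

with open("submission.json", "w") as f:
    json.dump(submission, f, indent=2)
```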
How often are the leaderboard rankings updated?
The rankings are updated in real time as new submissions are processed and verified by the platform.