Submit evaluations for speaker tagging and view leaderboard
The Post-ASR LLM-based Speaker Tagging Leaderboard is a data visualization tool for evaluating and comparing speaker tagging models. It focuses on post-automatic speech recognition (ASR) scenarios, leveraging large language models (LLMs) to identify and tag speakers in ASR transcripts. The leaderboard gives researchers and developers a platform to submit evaluations, track performance metrics, and compare results against other state-of-the-art models.
• Model Evaluation Submission: Allows users to submit their speaker tagging model evaluations for benchmarking.
• Performance Tracking: Displays detailed performance metrics such as accuracy, precision, recall, and F1-score.
• Leaderboard Visualization: Presents results in a clear, sortable leaderboard format for easy comparison.
• Support for LLMs: Compatible with various large language models to enhance speaker tagging accuracy.
• Real-Time Updates: Provides up-to-date rankings and performance data as new submissions are added.
• Customizable Filters: Enables filtering of results based on specific models, datasets, or evaluation criteria.
What metrics are used to evaluate speaker tagging models on this leaderboard?
The leaderboard uses standard metrics such as accuracy, precision, recall, and F1-score to evaluate speaker tagging performance.
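As an illustration of how these metrics apply to speaker tagging, the sketch below computes token-level precision, recall, and F1 per speaker label in plain Python. The function name and the macro, per-label treatment are assumptions for this example; the leaderboard's exact scoring procedure may differ.

```python
def tagging_metrics(reference, predicted):
    """Token-level precision/recall/F1 per speaker label.

    Illustrative sketch only -- the leaderboard's actual scoring
    (e.g. alignment handling, averaging) may differ.
    """
    assert len(reference) == len(predicted), "sequences must align"
    scores = {}
    for label in set(reference) | set(predicted):
        # True positives: both reference and prediction assign this speaker.
        tp = sum(r == label == p for r, p in zip(reference, predicted))
        # False positives: predicted this speaker where the reference disagrees.
        fp = sum(p == label != r for r, p in zip(reference, predicted))
        # False negatives: reference has this speaker but the prediction missed it.
        fn = sum(r == label != p for r, p in zip(reference, predicted))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        scores[label] = {"precision": precision, "recall": recall, "f1": f1}
    return scores

# Example: five tokens, one tagging error on the second token.
ref = ["spk1", "spk1", "spk2", "spk2", "spk1"]
hyp = ["spk1", "spk2", "spk2", "spk2", "spk1"]
print(tagging_metrics(ref, hyp))
```

Here "spk1" gets perfect precision (every "spk1" prediction is correct) but imperfect recall (one "spk1" token was mislabeled), which is exactly the distinction these metrics are meant to surface.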
Can I use any LLM for speaker tagging on this platform?
Yes, the platform supports evaluations using any large language model (LLM) as long as the results are formatted according to the submission guidelines.
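To make the idea of a formatted submission concrete, here is a purely hypothetical sketch of serializing model outputs to JSON. Every field name (`model_name`, `predictions`, `utterance_id`, `speaker_tags`) is an assumption invented for this example; the Space's actual submission guidelines define the real schema.

```python
import json

# Hypothetical submission payload -- all field names below are
# illustrative assumptions, not the platform's real schema.
submission = {
    "model_name": "my-llm-speaker-tagger",
    "predictions": [
        {"utterance_id": "utt_001", "speaker_tags": ["spk1", "spk1", "spk2"]},
        {"utterance_id": "utt_002", "speaker_tags": ["spk2", "spk1"]},
    ],
}

# Write the payload to disk for upload.
with open("submission.json", "w") as f:
    json.dump(submission, f, indent=2)
```

The point is only that any LLM's outputs can be evaluated once they are serialized into whatever shape the guidelines require.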
How often are the leaderboard rankings updated?
The rankings are updated in real time as new submissions are processed and verified by the platform.