Submit evaluations for speaker tagging and view the leaderboard
The Post-ASR LLM-based Speaker Tagging Leaderboard is an evaluation and visualization tool for comparing the performance of speaker tagging models. It focuses on post-automatic speech recognition (ASR) scenarios, where large language models (LLMs) identify and tag speakers in transcribed text. The leaderboard gives researchers and developers a platform to submit evaluations, track performance metrics, and compare results against other state-of-the-art models.
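To make the task concrete, here is a schematic illustration of the input/output contract for post-ASR speaker tagging. The tag format (`[SPK_n]`) and the hard-coded output are illustrative assumptions, not the leaderboard's actual data format:

```python
# Schematic illustration of post-ASR speaker tagging (assumed tag format).
# A real system would run an LLM over the ASR output; here the result is
# hard-coded purely to show what "tagging" means in this context.
asr_transcript = "hello there hi how are you i am fine thanks"

# An LLM-based tagger segments the transcript and labels each segment
# with a speaker identity, producing output like:
tagged = "[SPK_0] hello there [SPK_1] hi how are you [SPK_0] i am fine thanks"
print(tagged)
```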
• Model Evaluation Submission: Allows users to submit their speaker tagging model evaluations for benchmarking.
• Performance Tracking: Displays detailed performance metrics such as accuracy, precision, recall, and F1-score (see the metric sketch after this list).
• Leaderboard Visualization: Presents results in a clear, sortable leaderboard format for easy comparison.
• Support for LLMs: Compatible with various large language models to enhance speaker tagging accuracy.
• Real-Time Updates: Provides up-to-date rankings and performance data as new submissions are added.
• Customizable Filters: Enables filtering of results based on specific models, datasets, or evaluation criteria.
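As an illustration of how these metrics apply to speaker tagging, below is a minimal sketch using scikit-learn. The word-level alignment and the `SPK_0`/`SPK_1` label scheme are assumptions for the example; the leaderboard's actual scoring pipeline may align and aggregate differently:

```python
# Minimal sketch: scoring word-level speaker tags with standard metrics.
# Assumes reference and hypothesis are already aligned word-for-word and
# share a label scheme ("SPK_0", "SPK_1", ...) -- both are assumptions,
# not the leaderboard's documented evaluation procedure.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

reference  = ["SPK_0", "SPK_0", "SPK_1", "SPK_1", "SPK_0", "SPK_1"]
hypothesis = ["SPK_0", "SPK_1", "SPK_1", "SPK_1", "SPK_0", "SPK_0"]

accuracy = accuracy_score(reference, hypothesis)
precision, recall, f1, _ = precision_recall_fscore_support(
    reference, hypothesis, average="macro", zero_division=0
)

print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} f1={f1:.2f}")
```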
What metrics are used to evaluate speaker tagging models on this leaderboard?
The leaderboard uses standard metrics such as accuracy, precision, recall, and F1-score to evaluate speaker tagging performance.
Can I use any LLM for speaker tagging on this platform?
Yes, the platform supports evaluations using any large language model (LLM) as long as the results are formatted according to the submission guidelines.
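The exact schema is defined by the platform's submission guidelines. As a purely hypothetical illustration of what a formatted submission might look like, a payload could pair each utterance ID with the model's speaker-tagged output; every field name below is an assumption, not the documented format:

```python
# Hypothetical submission payload -- field names are illustrative
# assumptions, not the leaderboard's documented schema. Always follow
# the platform's actual submission guidelines.
import json

submission = {
    "model_name": "my-llm-speaker-tagger",  # hypothetical identifier
    "results": [
        {
            "utterance_id": "session_001",
            "hypothesis": "[SPK_0] hello there [SPK_1] hi how are you",
        },
    ],
}

with open("submission.json", "w") as f:
    json.dump(submission, f, indent=2)
```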
How often are the leaderboard rankings updated?
The rankings are updated in real-time as new submissions are processed and verified by the platform.