Explore and submit NER models
Clinical NER Leaderboard is a platform designed to evaluate and compare Named Entity Recognition (NER) models specifically within the clinical domain. It serves as a centralized hub for researchers and developers to explore, submit, and benchmark their NER models. The leaderboard provides transparency into model performance, fostering innovation and advancements in clinical NLP.
What types of models can I submit?
You can submit any NER model designed for clinical text processing, including rule-based, machine learning, and deep learning models.
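For example, a transformer-based model can be run on clinical text with the Hugging Face `transformers` pipeline. This is a minimal sketch; the model ID below is a placeholder, not a checkpoint endorsed or required by the leaderboard:

```python
# Hedged sketch: running a transformer-based clinical NER model via the
# `transformers` pipeline. Replace the placeholder model ID with the
# checkpoint you intend to submit.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="your-org/your-clinical-ner-model",  # placeholder model ID
    aggregation_strategy="simple",             # merge sub-word tokens into entity spans
)

note = "Patient presents with type 2 diabetes; started on metformin 500 mg."
for entity in ner(note):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```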
How are models evaluated?
Models are evaluated using standardized metrics such as precision, recall, F1-score, and throughput on curated clinical datasets.
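Concretely, entity-level scoring treats each entity as a (start, end, label) span and compares predicted spans against the gold annotations. The following is an illustrative sketch of that standard computation, not the leaderboard's actual scoring code:

```python
# Illustrative sketch: entity-level precision, recall, and F1 for NER.
# Entities are represented as (start, end, label) tuples; a prediction
# counts as a true positive only if boundaries and label match exactly.

def ner_scores(gold, pred):
    """gold and pred are sets of (start, end, label) tuples."""
    tp = len(gold & pred)  # exact span-and-label matches
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example: one entity found, one missed, one spurious prediction.
gold = {(0, 9, "PROBLEM"), (24, 31, "TREATMENT")}
pred = {(0, 9, "PROBLEM"), (40, 47, "TEST")}
p, r, f1 = ner_scores(gold, pred)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")  # 0.50 0.50 0.50
```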
Can I access the datasets used for benchmarking?
Yes, the datasets used for benchmarking are available for download, allowing you to train and fine-tune your models effectively.
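For instance, a benchmark corpus hosted on the Hugging Face Hub can be pulled with the `datasets` library in a couple of lines. The dataset ID here is a placeholder; use the IDs published alongside the leaderboard:

```python
# Hedged sketch: downloading a benchmark dataset with the `datasets` library.
# The dataset ID below is a placeholder, not a confirmed leaderboard dataset.
from datasets import load_dataset

ds = load_dataset("your-org/clinical-ner-benchmark")  # placeholder dataset ID
print(ds)              # available splits and their sizes
print(ds["train"][0])  # one annotated training example
```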