Explore and submit NER models
The Clinical NER Leaderboard is a platform for evaluating and comparing Named Entity Recognition (NER) models within the clinical domain. It serves as a centralized hub where researchers and developers can explore, submit, and benchmark their NER models. By making model performance transparent, the leaderboard fosters innovation and progress in clinical NLP.
What types of models can I submit?
You can submit any NER model designed for clinical text processing, including rule-based, machine learning, and deep learning models.
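If you want to sanity-check a deep learning model locally before submitting, a token-classification pipeline is one common route. This is a minimal sketch assuming the Hugging Face transformers library; the model ID is only an illustrative biomedical checkpoint, not one the leaderboard prescribes.

```python
# Sketch: run a transformers-based clinical NER model on a sample sentence.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="d4data/biomedical-ner-all",  # illustrative checkpoint, not prescribed
    aggregation_strategy="simple",      # merge sub-word tokens into whole entities
)

text = "Patient was started on 40 mg atorvastatin for hyperlipidemia."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```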
How are models evaluated?
Models are evaluated using standardized metrics such as precision, recall, F1-score, and throughput on curated clinical datasets.
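NER benchmarks typically score at the entity level. As a rough illustration, the sketch below computes precision, recall, and F1 under exact-span matching; that matching rule is a common convention and an assumption here, since the leaderboard may define matching differently.

```python
# Sketch: entity-level precision/recall/F1 under exact-span matching (assumed).
def ner_scores(gold, predicted):
    """gold/predicted: sets of (start, end, label) tuples for one document."""
    true_positives = len(gold & predicted)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {(0, 7, "DRUG"), (23, 37, "DISEASE")}
pred = {(0, 7, "DRUG"), (40, 52, "DISEASE")}  # one correct, one spurious span
print(ner_scores(gold, pred))  # (0.5, 0.5, 0.5)
```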
Can I access the datasets used for benchmarking?
Yes, the datasets used for benchmarking are available for download, allowing you to train and fine-tune your models effectively.
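As a sketch of what that workflow can look like with the Hugging Face datasets library: the dataset ID and column names below are hypothetical placeholders, since the actual repository names depend on the leaderboard.

```python
# Sketch: download a benchmark dataset and inspect one training example.
from datasets import load_dataset

dataset = load_dataset("clinical-ner-leaderboard/benchmark")  # hypothetical ID
print(dataset)  # typically a DatasetDict with train/validation/test splits

# Token-classification datasets usually pair tokens with integer NER tags;
# the column names here are assumptions.
example = dataset["train"][0]
print(example["tokens"], example["ner_tags"])
```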