Explore and submit NER models
The Clinical NER Leaderboard is a platform for evaluating and comparing Named Entity Recognition (NER) models in the clinical domain. It serves as a centralized hub where researchers and developers can explore, submit, and benchmark NER models, providing transparency into model performance and fostering advances in clinical NLP.
What types of models can I submit?
You can submit any NER model designed for clinical text processing, including rule-based, machine learning, and deep learning models.
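As a quick local sanity check before submitting a transformer-based model, you can run it through the Hugging Face pipeline API. This is a minimal sketch, assuming a token-classification checkpoint; the model ID "my-org/clinical-ner-model" is a placeholder, not an actual submission:

```python
# Minimal sketch: run a clinical NER checkpoint locally before submitting.
# "my-org/clinical-ner-model" is a placeholder model ID.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="my-org/clinical-ner-model",  # hypothetical checkpoint
    aggregation_strategy="simple",      # merge sub-word tokens into whole entities
)

text = "Patient was started on 500 mg metformin twice daily for type 2 diabetes."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```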
How are models evaluated?
Models are evaluated using standardized metrics such as precision, recall, F1-score, and throughput on curated clinical datasets.
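The leaderboard's exact evaluation harness is not shown here, but entity-level precision, recall, and F1 for NER are commonly computed with the seqeval library over BIO-tagged sequences. A minimal sketch, assuming IOB2 labels:

```python
# Minimal sketch: entity-level NER metrics with seqeval (a common choice;
# the leaderboard's own harness may differ). Labels are in IOB2 format.
from seqeval.metrics import precision_score, recall_score, f1_score

y_true = [["B-Drug", "I-Drug", "O", "B-Disease", "I-Disease"]]
y_pred = [["B-Drug", "I-Drug", "O", "B-Disease", "O"]]

# seqeval scores whole entity spans, so the partially matched Disease
# span counts as both a false positive and a false negative.
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("F1:       ", f1_score(y_true, y_pred))
```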
Can I access the datasets used for benchmarking?
Yes, the datasets used for benchmarking are available for download, allowing you to train and fine-tune your models effectively.
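A minimal sketch of pulling a benchmark corpus with the datasets library; "my-org/clinical-ner-benchmark" is a placeholder for whichever dataset the leaderboard actually lists:

```python
# Minimal sketch: download a benchmark dataset for local training/evaluation.
# "my-org/clinical-ner-benchmark" is a placeholder dataset ID.
from datasets import load_dataset

dataset = load_dataset("my-org/clinical-ner-benchmark")
print(dataset)              # available splits (e.g. train/validation/test)
print(dataset["train"][0])  # inspect one example, assuming a "train" split
```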