Explore and submit NER models
Clinical NER Leaderboard is a platform for evaluating and comparing Named Entity Recognition (NER) models in the clinical domain. It serves as a centralized hub where researchers and developers can explore, submit, and benchmark their NER models. By making model performance transparent, the leaderboard fosters innovation and progress in clinical NLP.
What types of models can I submit?
You can submit any NER model designed for clinical text processing, including rule-based, machine learning, and deep learning models.
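For the deep-learning case, it can help to sanity-check a model locally before submitting. A minimal sketch using the Hugging Face `pipeline` API is shown below; the checkpoint name is a placeholder, not an actual leaderboard model.

```python
# A minimal sketch of running a transformer-based NER model on clinical text.
# The checkpoint name is a placeholder; substitute your own model.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="your-username/your-clinical-ner-model",  # hypothetical checkpoint
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
print(ner("Patient was started on metformin for type 2 diabetes."))
```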
How are models evaluated?
Models are evaluated using standardized metrics such as precision, recall, F1-score, and throughput on curated clinical datasets.
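The leaderboard's exact scoring harness is not shown here, but NER benchmarks conventionally use entity-level (not token-level) precision, recall, and F1. As an illustration, these can be computed with the `seqeval` library over BIO-tagged sequences; the label set below is assumed for the example.

```python
# Illustrative only: seqeval computes entity-level precision/recall/F1,
# counting an entity as correct only if both its span and type match.
from seqeval.metrics import precision_score, recall_score, f1_score

# Hypothetical gold and predicted annotations for two clinical sentences,
# using BIO tags for PROBLEM and TREATMENT entities (label set assumed).
y_true = [
    ["B-PROBLEM", "I-PROBLEM", "O", "O", "B-TREATMENT"],
    ["O", "B-PROBLEM", "O"],
]
y_pred = [
    ["B-PROBLEM", "I-PROBLEM", "O", "O", "O"],  # missed the TREATMENT entity
    ["O", "B-PROBLEM", "O"],
]

print(f"precision: {precision_score(y_true, y_pred):.2f}")  # 1.00
print(f"recall:    {recall_score(y_true, y_pred):.2f}")     # 0.67
print(f"f1:        {f1_score(y_true, y_pred):.2f}")         # 0.80
```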
Can I access the datasets used for benchmarking?
Yes, the datasets used for benchmarking are available for download, allowing you to train and fine-tune your models effectively.
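As a sketch, a benchmark dataset hosted on the Hugging Face Hub can be pulled with the `datasets` library; the dataset ID below is a placeholder, not the leaderboard's actual repository name.

```python
# A minimal sketch, assuming the benchmark data is hosted on the Hub.
from datasets import load_dataset

ds = load_dataset("clinical-ner/benchmark")  # hypothetical dataset ID
print(ds["train"][0])  # e.g. {"tokens": [...], "ner_tags": [...]}
```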