Explore and submit NER models
The Clinical NER Leaderboard is a platform for evaluating and comparing Named Entity Recognition (NER) models in the clinical domain. It serves as a centralized hub where researchers and developers can explore, submit, and benchmark their NER models. The leaderboard provides transparency into model performance, fostering innovation and advancement in clinical NLP.
What types of models can I submit?
You can submit any NER model designed for clinical text processing, including rule-based, machine learning, and deep learning models.
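The exact submission format is not detailed here, but as a minimal sketch, a transformer-based clinical NER model can be exercised locally with the Hugging Face transformers token-classification pipeline before submission. The model name below is a placeholder, not a real checkpoint:

```python
from transformers import pipeline

# "your-org/clinical-ner-model" is a placeholder -- substitute the
# checkpoint you intend to submit.
ner = pipeline(
    "token-classification",
    model="your-org/clinical-ner-model",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)

text = "Patient was started on 40 mg atorvastatin for hyperlipidemia."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```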
How are models evaluated?
Models are evaluated using standardized metrics such as precision, recall, F1-score, and throughput on curated clinical datasets.
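The leaderboard's exact evaluation harness is not shown here; as a sketch of entity-level scoring, the seqeval library computes precision, recall, and F1 from IOB2 tag sequences. The tag sets and sentences below are illustrative only:

```python
from seqeval.metrics import precision_score, recall_score, f1_score

# Toy gold and predicted IOB2 sequences for two sentences (illustrative only).
y_true = [["B-PROBLEM", "I-PROBLEM", "O", "B-TREATMENT"],
          ["O", "B-TEST", "O"]]
y_pred = [["B-PROBLEM", "I-PROBLEM", "O", "O"],
          ["O", "B-TEST", "O"]]

# seqeval scores at the entity level: a prediction counts only if the
# full span and the entity type both match the gold annotation.
print(f"precision: {precision_score(y_true, y_pred):.3f}")  # 1.000
print(f"recall:    {recall_score(y_true, y_pred):.3f}")     # 0.667
print(f"F1:        {f1_score(y_true, y_pred):.3f}")         # 0.800
```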
Can I access the datasets used for benchmarking?
Yes, the datasets used for benchmarking are available for download, allowing you to train and fine-tune your models effectively.
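As one hedged example, a benchmark corpus hosted on the Hugging Face Hub could be pulled with the datasets library; the dataset identifier below is hypothetical:

```python
from datasets import load_dataset

# "your-org/clinical-ner-benchmark" is a hypothetical ID -- replace it
# with the actual dataset listed on the leaderboard.
dataset = load_dataset("your-org/clinical-ner-benchmark")

# NER corpora typically expose token and tag columns per example.
print(dataset["train"][0])
```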