Explore and submit NER models
Clinical NER Leaderboard is a platform for evaluating and comparing Named Entity Recognition (NER) models within the clinical domain. It serves as a centralized hub where researchers and developers can explore, submit, and benchmark their NER models. By making model performance transparent, the leaderboard fosters progress in clinical NLP.
What types of models can I submit?
You can submit any NER model designed for clinical text processing, including rule-based, machine learning, and deep learning models.
How are models evaluated?
Models are evaluated using standardized metrics such as precision, recall, F1-score, and throughput on curated clinical datasets.
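The span-level metrics above can be sketched in a few lines of Python. This is an illustrative scoring function, not the leaderboard's actual evaluation harness; the entity representation as `(start, end, label)` tuples is an assumption.

```python
# A minimal sketch of entity-level scoring, assuming gold and predicted
# entities are given as sets of (start, end, label) tuples per document.
# This is illustrative only, not the leaderboard's official harness.

def entity_f1(gold, pred):
    """Micro-averaged precision, recall, and F1 over entity spans.

    gold, pred: lists of sets of (start, end, label) tuples, one set per
    document. An entity counts as correct only on an exact span-and-label
    match.
    """
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        tp += len(g & p)   # exact matches
        fp += len(p - g)   # predicted but not in gold
        fn += len(g - p)   # gold but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example: one document, two gold entities, two predictions (one correct)
gold = [{(0, 7, "DISEASE"), (12, 21, "DRUG")}]
pred = [{(0, 7, "DISEASE"), (12, 21, "DOSAGE")}]
print(entity_f1(gold, pred))  # (0.5, 0.5, 0.5)
```

Exact-match scoring is the strictest convention; some clinical benchmarks also report relaxed (overlap-based) matching, which this sketch does not cover.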
Can I access the datasets used for benchmarking?
Yes, the datasets used for benchmarking are available for download, allowing you to train and fine-tune your models effectively.
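When training on a downloaded dataset, a common preprocessing step is converting token-level BIO tags into entity spans. A minimal sketch is below; the BIO tag scheme (`B-`/`I-` prefixes, `O` for outside) is an assumption, and real benchmark files may use a different scheme such as BILOU.

```python
# Sketch: convert a BIO tag sequence into (start_token, end_token, label)
# spans, with end_token exclusive. The tag scheme is assumed, not taken
# from any specific benchmark file format.

def bio_to_spans(tags):
    """Convert a BIO tag sequence to (start_token, end_token, label) spans."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:          # close any open span first
                spans.append((start, i, label))
            start, label = i, tag[2:]
        elif tag.startswith("I-") and label == tag[2:]:
            continue                       # span continues
        else:                              # "O" or an inconsistent I- tag
            if start is not None:
                spans.append((start, i, label))
            start, label = None, None
    if start is not None:                  # span open at end of sequence
        spans.append((start, len(tags), label))
    return spans

tags = ["B-DRUG", "I-DRUG", "O", "B-DISEASE"]
print(bio_to_spans(tags))  # [(0, 2, 'DRUG'), (3, 4, 'DISEASE')]
```

Note that an `I-` tag whose label does not match the open span is treated here as closing that span; other conventions (e.g. starting a new span) are equally common.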