Browse and submit LLM evaluations
The Open Medical-LLM Leaderboard is a platform for benchmarking and comparing large language models (LLMs) tailored to medical and healthcare applications. It provides a centralized hub where users can browse, evaluate, and submit model evaluations, fostering transparency and collaboration in the development of AI for medical use cases.
What types of medical applications are supported?
The Open Medical-LLM Leaderboard supports a wide range of medical applications, including clinical text analysis, medical question answering, and healthcare document summarization.
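To make the idea of evaluating a model on a task like medical question answering concrete, here is a minimal sketch of exact-match scoring. The questions, answers, and `toy_model` function are hypothetical stand-ins for illustration, not the leaderboard's actual benchmark data or evaluation code.

```python
# Minimal sketch of scoring a medical question-answering model with
# exact-match accuracy. All data below is a toy example, not taken
# from the leaderboard's real benchmarks.

def exact_match_accuracy(predictions, references):
    """Fraction of predictions matching the reference answer exactly
    (case-insensitive, surrounding whitespace ignored)."""
    if not references:
        return 0.0
    hits = sum(
        p.strip().lower() == r.strip().lower()
        for p, r in zip(predictions, references)
    )
    return hits / len(references)

# Toy benchmark: (question, gold answer) pairs.
benchmark = [
    ("Which vitamin deficiency causes scurvy?", "Vitamin C"),
    ("What organ produces insulin?", "Pancreas"),
]

# Stand-in "model": in practice this would be a call to an LLM.
def toy_model(question):
    answers = {"Which vitamin deficiency causes scurvy?": "vitamin c"}
    return answers.get(question, "unknown")

preds = [toy_model(q) for q, _ in benchmark]
golds = [a for _, a in benchmark]
print(exact_match_accuracy(preds, golds))  # → 0.5
```

Real leaderboards typically combine several such per-task metrics into an overall average for ranking.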
How do I submit my own model evaluation?
To submit your model evaluation, use the submission form on the leaderboard's Space: provide your model's Hugging Face Hub identifier along with the requested configuration details. The evaluation is then queued, and the results appear on the leaderboard once it completes.
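As an illustration of what a submission entails, the sketch below validates a submission-style metadata record before it is sent. The field names and allowed values are assumptions modeled on typical Hugging Face leaderboard forms, not the leaderboard's real schema.

```python
# Hypothetical sketch of leaderboard submission metadata and a
# pre-submission check. Field names and allowed precisions are
# assumptions, not the leaderboard's actual schema.

REQUIRED_FIELDS = {"model_id", "revision", "precision", "model_type"}
ALLOWED_PRECISIONS = {"float16", "bfloat16", "float32", "8bit", "4bit"}

def validate_submission(entry: dict) -> list[str]:
    """Return a list of problems; an empty list means the entry looks ready."""
    problems = [
        f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - entry.keys())
    ]
    if "model_id" in entry and "/" not in entry["model_id"]:
        problems.append("model_id should look like 'org/model-name'")
    if "precision" in entry and entry["precision"] not in ALLOWED_PRECISIONS:
        problems.append(f"unsupported precision: {entry['precision']}")
    return problems

submission = {
    "model_id": "my-org/medical-llm-7b",  # hypothetical Hub repo id
    "revision": "main",
    "precision": "float16",
    "model_type": "fine-tuned",
}
print(validate_submission(submission))  # → []
```

Checking these details locally before submitting avoids a failed run in the evaluation queue.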
Is the leaderboard open to non-experts?
Yes, the leaderboard is designed to be accessible to both experts and non-experts. Researchers, developers, and healthcare professionals can all benefit from the platform's resources and tools.