Browse and submit LLM evaluations
Display genomic embedding leaderboard
Convert Hugging Face models to OpenVINO format
Generate leaderboard comparing DNA models
View NSQL scores for models
Measure execution times of BERT models using WebGPU and WASM
Explore and visualize diverse models
Track, rank and evaluate open LLMs and chatbots
Launch web-based model application
Find and download models from Hugging Face
Evaluate AI-generated results for accuracy
Create and upload a Hugging Face model card
Browse and submit evaluations for CaselawQA benchmarks
The Open Medical-LLM Leaderboard is a platform for benchmarking and comparing large language models (LLMs) tailored to medical and healthcare applications. It provides a centralized hub where users can browse results, evaluate models, and submit their own evaluations, fostering transparency and collaboration in the development of AI for medical use cases.
What types of medical applications are supported?
The Open Medical-LLM Leaderboard supports a wide range of medical applications, including clinical text analysis, medical question answering, and healthcare document summarization.
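For illustration only, the sketch below shows the kind of medical question-answering prompt such benchmarks exercise, using the Hugging Face transformers text-generation pipeline. The model id is a placeholder (not a model affiliated with the leaderboard), and the prompt format is an assumption rather than the leaderboard's exact evaluation harness.

```python
# Minimal sketch of a medical multiple-choice QA prompt, assuming an
# instruction-tuned causal LM on the Hugging Face Hub. The model id below
# is a placeholder; substitute any model you want to try.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="your-org/your-medical-llm",  # placeholder model id
)

prompt = (
    "Question: Which vitamin deficiency causes scurvy?\n"
    "Options: A) Vitamin A  B) Vitamin B12  C) Vitamin C  D) Vitamin D\n"
    "Answer:"
)

# Generate a short continuation containing the model's chosen answer.
result = generator(prompt, max_new_tokens=16)
print(result[0]["generated_text"])
```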
How do I submit my own model evaluation?
To submit your model evaluation, make sure your model is publicly available on the Hugging Face Hub, then open the leaderboard's submission tab, fill in the model name and configuration details, and submit the form. Your results appear on the leaderboard once the evaluation completes.
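As a rough pre-submission sanity check (an assumption based on how Hugging Face leaderboards typically work, not an official requirement stated here), you can verify that your model is public and loadable with the standard Auto classes before submitting:

```python
# Hypothetical pre-submission check: confirm the repository is public on the
# Hub and that config, tokenizer, and weights all load with the Auto classes.
# MODEL_ID is a placeholder; replace it with your own repository name.
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM

MODEL_ID = "your-org/your-medical-llm"  # placeholder repository id

config = AutoConfig.from_pretrained(MODEL_ID)            # fails if repo is private or missing
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)      # tokenizer must load
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)   # weights must load

print(f"{MODEL_ID} loads correctly ({model.num_parameters():,} parameters)")
```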
Is the leaderboard open to non-experts?
Yes, the leaderboard is designed to be accessible to both experts and non-experts. Researchers, developers, and healthcare professionals can all benefit from the platform's resources and tools.