Explore and filter language model benchmark results
Open Ko-LLM Leaderboard is a web-based platform for exploring and filtering benchmark results of large language models (LLMs). It provides a comprehensive overview of model performance, with a particular focus on Korean language models, enabling users to compare and evaluate models against various metrics and criteria.
• Benchmark Summaries: Access detailed performance metrics for a wide range of language models.
• Advanced Filtering: Filter models by parameters such as model size, architecture, and training data.
• Performance Metrics: View metrics such as perplexity, accuracy, and F1-score across different tasks.
• Model Comparison: Compare multiple models side by side to identify strengths and weaknesses.
• Regular Updates: Stay informed with the latest benchmark results as new models are released.
• User-Friendly Interface: Intuitive design makes it easy to navigate and find relevant information.
What is the purpose of the Open Ko-LLM Leaderboard?
The leaderboard provides a centralized platform for comparing and evaluating the performance of Korean language models across a range of tasks and metrics.
How often is the leaderboard updated?
The leaderboard is updated regularly as new models are released and benchmarked.
Can I use the leaderboard for model selection?
Yes, the leaderboard is designed to help users select models based on specific requirements by providing detailed performance metrics and comparisons.