Embedding Leaderboard
Display and explore model leaderboards and chat history
Detect AI-generated texts with precision
Ask questions about air quality data with pre-built prompts or your own queries
Check text for moderation flags
G2P
eRAG-Election: an Election Commission of Thailand (กกต.) AI supporting election knowledge and more
Similarity
Open LLM (CohereForAI/c4ai-command-r7b-12-2024) and RAG
Deduplicate HuggingFace datasets in seconds
Generate relation triplets from text
Explore and interact with HuggingFace LLM APIs using Swagger UI
Upload a table to predict basalt source lithology, temperature, and pressure
The MTEB (Massive Text Embedding Benchmark) Leaderboard is a comprehensive platform for evaluating and comparing text embedding models across benchmarks and languages. It provides a standardized evaluation framework, enabling researchers and developers to identify the most effective embedding model for their specific use case.
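Most models on the leaderboard can be loaded and queried in a few lines of Python. A minimal sketch, assuming the sentence-transformers package is installed (the model name here is purely an illustrative choice):

```python
from sentence_transformers import SentenceTransformer

# Illustrative model choice; any leaderboard model published as a
# sentence-transformers checkpoint is loaded the same way.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

sentences = [
    "Which embedding model should I use for retrieval?",
    "The MTEB Leaderboard compares embedding models across tasks.",
]

# encode() returns one fixed-size vector per input sentence.
embeddings = model.encode(sentences)
print(embeddings.shape)  # (2, 384) for this particular model
```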
What benchmarks are available on the MTEB Leaderboard?
The MTEB Leaderboard supports a wide range of benchmarks covering the core text-embedding tasks, including classification, clustering, pair classification, reranking, retrieval, semantic textual similarity (STS), summarization, and bitext mining.
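The available tasks can be browsed programmatically. A minimal sketch, assuming a recent version of the mteb package (older releases expose similar filters through the MTEB(...) constructor instead of get_tasks):

```python
import mteb

# List benchmark tasks, filtered by task type and language.
tasks = mteb.get_tasks(
    task_types=["Classification", "Clustering", "Retrieval"],
    languages=["eng"],
)

# Print a small sample of matching tasks with their task type.
for task in list(tasks)[:10]:
    print(task.metadata.name, "-", task.metadata.type)
```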
How do I interpret the scores on the leaderboard?
Each benchmark reports a task-appropriate metric, such as accuracy for classification, nDCG@10 for retrieval, or Spearman correlation for semantic textual similarity. Higher scores indicate better performance on that task.
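As a concrete example of one such metric, an STS-style benchmark scores a model by the Spearman correlation between the cosine similarities of its embeddings and human-annotated gold scores. A minimal sketch (the sentence pairs and gold scores below are made up for illustration):

```python
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Hypothetical sentence pairs with human similarity judgments (0-5 scale).
pairs = [
    ("A man is playing a guitar.", "A person plays an instrument."),
    ("A man is playing a guitar.", "A chef is cooking pasta."),
]
gold = [4.2, 0.5]

# Model's predicted similarity: cosine similarity of the two embeddings.
predicted = [
    util.cos_sim(model.encode(a), model.encode(b)).item() for a, b in pairs
]

# Spearman correlation between predicted and gold similarities;
# this is the kind of number an STS benchmark would report.
corr, _ = spearmanr(predicted, gold)
print(corr)
```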
Can I evaluate my custom model on the MTEB Leaderboard?
Yes, you can evaluate custom models by generating embeddings for the selected benchmarks and languages, and then uploading the results to the leaderboard for comparison.
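An evaluation run with the mteb Python package might look like the following sketch (the model name and task selection are illustrative; consult the package documentation for the current submission workflow):

```python
import mteb
from sentence_transformers import SentenceTransformer

# Any model exposing an encode() method works; a sentence-transformers
# checkpoint is the most common case. Model choice here is illustrative.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Select the benchmark tasks to evaluate on.
tasks = mteb.get_tasks(tasks=["Banking77Classification"])
evaluation = mteb.MTEB(tasks=tasks)

# Runs the tasks and writes per-task JSON result files, which can then
# be submitted for display on the leaderboard.
results = evaluation.run(model, output_folder="results")
```

The output folder then contains one result file per task, covering every metric that benchmark defines, not just the headline score shown on the leaderboard.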