Generative Tasks Evaluation of Arabic LLMs
The AraGen Leaderboard is an evaluation platform for assessing the performance of Arabic large language models (LLMs) on generative tasks. It provides a transparent, standardized framework for benchmarking and comparing models by their capabilities, accuracy, and effectiveness in generating Arabic text. The platform serves as a resource for researchers, developers, and users to track advances in Arabic NLP and identify top-performing models.
• Comprehensive Evaluation Metrics: Assesses models across a variety of tasks, including text generation, summarization, and conversational dialogue.
• Benchmarking Capabilities: Allows for direct comparison of different Arabic LLMs using standardized benchmarks.
• Regular Updates: Keeps pace with the latest advancements in Arabic LLMs through frequent leaderboard refreshes.
• Customizable Filters: Enables users to filter results by criteria such as model size, training data, or task (see the sketch after this list).
• Transparency in Scoring: Provides detailed insights into evaluation methodologies and scoring systems for full accountability.
• Community Engagement: Facilitates collaboration and discussion among researchers and developers to foster innovation.
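The filtering described above can also be reproduced offline if you export the leaderboard table. The snippet below is a minimal sketch, assuming a hypothetical CSV export named `aragen_results.csv` with `model`, `size_b` (parameters in billions), `task`, and `score` columns; the actual Space may expose its results in a different format.

```python
import pandas as pd

# Hypothetical leaderboard export; file name and column names are
# assumptions, not the official AraGen schema.
results = pd.read_csv("aragen_results.csv")

# Keep models with at most 15B parameters and rank them by their
# summarization score, mirroring the size/task filters described above.
small_models = results[results["size_b"] <= 15]
summarization = (
    small_models[small_models["task"] == "summarization"]
    .sort_values("score", ascending=False)
)

print(summarization[["model", "size_b", "score"]].head(10))
```

Local slicing like this complements the leaderboard's built-in filters when you want to rank models by your own criteria.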
1. How often is the AraGen Leaderboard updated?
The AraGen Leaderboard is updated regularly to reflect new models, improvements in existing models, and advancements in evaluation methodologies.
2. Can I submit my own model for evaluation?
Yes, the AraGen Leaderboard encourages submissions from developers. Please refer to the submission guidelines on the platform for details on how to participate.
3. What criteria are used to evaluate the models?
The models are evaluated based on a range of tasks, including but not limited to text generation, summarization, and conversational dialogue, using standardized metrics and benchmarks.