Generative Tasks Evaluation of Arabic LLMs
The AraGen Leaderboard is a comprehensive evaluation platform for assessing the performance of Arabic large language models (LLMs) on generative tasks. It provides a transparent, standardized framework for benchmarking and comparing models based on their capabilities, accuracy, and effectiveness in generating Arabic text. The platform serves as a resource for researchers, developers, and users to track advances in Arabic NLP and identify top-performing models.
Key Features
• Comprehensive Evaluation Metrics: Assesses models across a variety of tasks, including text generation, summarization, and conversational dialogue.
• Benchmarking Capabilities: Allows for direct comparison of different Arabic LLMs using standardized benchmarks.
• Real-Time Updates: Reflects the latest advancements in Arabic LLMs with regular updates to the leaderboard.
• Customizable Filters: Enables users to filter results based on specific criteria such as model size, training data, or tasks.
• Transparency in Scoring: Provides detailed insights into evaluation methodologies and scoring systems for full accountability.
• Community Engagement: Facilitates collaboration and discussion among researchers and developers to foster innovation.
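To make the "customizable filters" idea above concrete, here is a minimal sketch of filtering leaderboard-style results by model size and task, then ranking by score. The column names, model names, and scores are illustrative assumptions, not AraGen's actual schema or data.

```python
# Hypothetical sketch of leaderboard filtering; the schema and
# scores below are made up for illustration.
import pandas as pd

leaderboard = pd.DataFrame(
    {
        "model": ["model-a", "model-b", "model-c", "model-d"],
        "size_b": [7, 13, 7, 70],            # parameter count in billions
        "task": ["generation", "generation", "summarization", "generation"],
        "score": [62.4, 68.1, 59.7, 74.3],   # illustrative benchmark scores
    }
)

# Filter: generation-task models with at most 13B parameters,
# ranked by score (highest first).
filtered = (
    leaderboard[
        (leaderboard["task"] == "generation") & (leaderboard["size_b"] <= 13)
    ]
    .sort_values("score", ascending=False)
    .reset_index(drop=True)
)

print(filtered["model"].tolist())  # ['model-b', 'model-a']
```

The same pattern extends to any column the leaderboard exposes (e.g. training data or task type), which is the essence of the filter feature described above.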
Frequently Asked Questions
1. How often is the AraGen Leaderboard updated?
The AraGen Leaderboard is updated regularly to reflect new models, improvements in existing models, and advancements in evaluation methodologies.
2. Can I submit my own model for evaluation?
Yes, the AraGen Leaderboard accepts submissions from developers. Refer to the submission guidelines on the platform for details on how to participate.
3. What criteria are used to evaluate the models?
The models are evaluated based on a range of tasks, including but not limited to text generation, summarization, and conversational dialogue, using standardized metrics and benchmarks.