Generative Tasks Evaluation of Arabic LLMs
Track, rank and evaluate open Arabic LLMs and chatbots
The AraGen Leaderboard is a comprehensive evaluation platform for assessing the performance of Arabic large language models (LLMs) on generative tasks. It provides a transparent, standardized framework for benchmarking and comparing models on their capabilities, accuracy, and effectiveness in generating Arabic text. The platform serves as a resource for researchers, developers, and users who want to track advances in Arabic NLP and identify top-performing models.
• Comprehensive Evaluation Metrics: Assesses models across a variety of tasks, including text generation, summarization, and conversational dialogue.
• Benchmarking Capabilities: Allows for direct comparison of different Arabic LLMs using standardized benchmarks.
• Real-Time Updates: Reflects the latest advancements in Arabic LLMs with regular updates to the leaderboard.
• Customizable Filters: Enables users to filter results based on specific criteria such as model size, training data, or tasks.
• Transparency in Scoring: Provides detailed insights into evaluation methodologies and scoring systems for full accountability.
• Community Engagement: Facilitates collaboration and discussion among researchers and developers to foster innovation.
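The filtering workflow described above can be sketched with a small pandas snippet. The column names and values here are illustrative assumptions, not the actual AraGen schema:

```python
import pandas as pd

# Hypothetical leaderboard snapshot; model names, columns, and scores
# are made up for illustration and do not reflect real AraGen results.
results = pd.DataFrame({
    "model": ["model-a", "model-b", "model-c"],
    "size_b": [7, 13, 70],  # parameter count, in billions
    "task": ["generation", "summarization", "generation"],
    "score": [62.5, 58.1, 71.3],
})

# Filter to generation-task models under 70B parameters,
# then rank the remaining rows by score, best first.
filtered = (
    results[(results["task"] == "generation") & (results["size_b"] < 70)]
    .sort_values("score", ascending=False)
)
print(filtered["model"].tolist())
```

The same pattern extends to any combination of criteria (model size, training data, task) by adding boolean conditions to the row mask.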
1. How often is the AraGen Leaderboard updated?
The AraGen Leaderboard is updated regularly to reflect new models, improvements in existing models, and advancements in evaluation methodologies.
2. Can I submit my own model for evaluation?
Yes, the AraGen Leaderboard welcomes submissions from developers. See the submission guidelines on the platform for details on how to participate.
3. What criteria are used to evaluate the models?
The models are evaluated based on a range of tasks, including but not limited to text generation, summarization, and conversational dialogue, using standardized metrics and benchmarks.