Cetvel: A Unified Benchmark for Evaluating Turkish LLMs
Cetvel is a benchmarking tool designed to evaluate the performance of Turkish Large Language Models (LLMs). It provides a comprehensive framework for assessing model capabilities across various natural language processing tasks. Cetvel automates the evaluation process, enabling users to compare and analyze the performance of different models efficiently.
• Task Coverage: Evaluate models on a wide range of Turkish NLP tasks, including text classification, summarization, and question answering.
• Customizable Benchmarks: Tailor evaluation metrics and tasks to specific use cases.
• Detailed Performance Reports: Generate in-depth analysis of model strengths and weaknesses.
• Cross-Model Comparison: Compare multiple models side by side to identify the best performer for your needs (illustrated in the sketch after this list).
• Easy Integration: Seamlessly integrate with popular Turkish LLMs for quick and accurate benchmarking.
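The cross-model comparison described above can be scripted. Cetvel's exact API is not documented here, so the sketch below assumes an lm-evaluation-harness-style Python interface (`lm_eval.simple_evaluate`); the model IDs and Cetvel task names are placeholders, not the tool's actual identifiers — substitute the ones listed in the official documentation.

```python
# Hypothetical sketch: comparing several Turkish LLMs side by side with an
# lm-evaluation-harness-style API. Model IDs and task names are placeholders.
import lm_eval

MODELS = [
    "your-org/turkish-llm-a",   # placeholder model IDs
    "your-org/turkish-llm-b",
]
TASKS = ["cetvel_tr_summarization", "cetvel_tr_qa"]  # hypothetical task names

comparison = {}
for model_id in MODELS:
    out = lm_eval.simple_evaluate(
        model="hf",                              # Hugging Face backend
        model_args=f"pretrained={model_id}",
        tasks=TASKS,
        num_fewshot=0,
        batch_size=8,
    )
    # out["results"] maps each task name to its metric dictionary
    comparison[model_id] = out["results"]

# Print a simple side-by-side summary
for model_id, per_task in comparison.items():
    print(model_id)
    for task, metrics in per_task.items():
        print(f"  {task}: {metrics}")
```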
What models does Cetvel support?
Cetvel supports a wide range of Turkish LLMs, including popular models like BERTurk, TTUM, and others. For the full list of supported models, refer to the official documentation.
How do I customize the benchmarking tasks?
Customization options are available through the Cetvel interface, where you can select specific tasks, datasets, and evaluation metrics to tailor the benchmarking process to your needs.
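For instance, a tailored run might restrict evaluation to a subset of tasks, change the few-shot setting, or cap the number of samples per task. The snippet below is a hypothetical illustration in the same lm-evaluation-harness style as above; the task name and options shown are assumptions, not Cetvel's documented interface.

```python
# Hypothetical customization sketch: pick a task subset, few-shot count,
# and a per-task sample limit, then save the report to JSON.
import json
import lm_eval

out = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=your-org/your-turkish-llm",  # placeholder model ID
    tasks=["cetvel_tr_text_classification"],            # hypothetical task name
    num_fewshot=5,      # evaluate in a 5-shot setting
    batch_size=4,
    limit=200,          # cap the number of evaluated examples per task
)

with open("cetvel_report.json", "w", encoding="utf-8") as f:
    json.dump(out["results"], f, ensure_ascii=False, indent=2)
```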
Where can I find Cetvel?
Cetvel is available from its official GitHub repository. Follow the installation instructions there to set up the tool correctly.