Access and submit models to an Egyptian Arabic translation leaderboard
The English To Egyptian Arabic Translation Leaderboard is a platform for evaluating and comparing machine translation models that translate English text into Egyptian Arabic. It gives researchers, developers, and language professionals a centralized space to measure their models against industry benchmarks and competing systems. Users can submit models, track their performance, and identify areas for improvement.
What evaluation metrics are used on the leaderboard?
The leaderboard uses industry-standard metrics such as BLEU, ROUGE, and METEOR to evaluate translation quality.
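As an illustration, here is a minimal sketch of how corpus-level BLEU is typically computed with the sacrebleu package; the leaderboard's exact scoring pipeline, tokenization, and reference data are not specified here, so the strings below are hypothetical stand-ins. ROUGE and METEOR can be computed analogously with the rouge_score and nltk packages.

```python
# Minimal BLEU-scoring sketch (pip install sacrebleu).
# The example sentences are hypothetical; the leaderboard's actual
# benchmark references and tokenizer settings may differ.
import sacrebleu

hyps = ["ازيك؟ عامل ايه النهارده؟"]   # model outputs (Egyptian Arabic)
refs = ["ازيك، عامل ايه النهارده؟"]   # gold references, parallel to hyps

# sacrebleu expects a list of reference streams, so a single
# reference per sentence is wrapped as [refs].
bleu = sacrebleu.corpus_bleu(hyps, [refs])
print(f"BLEU: {bleu.score:.2f}")
```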
How do I submit my model for evaluation?
Submissions can be made via the platform's web interface or by providing an API endpoint. Detailed instructions are available on the leaderboard's documentation page.
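For the API-endpoint route, the request and response schema is defined on the leaderboard's documentation page. Purely as a sketch of what such an endpoint could look like, here is a FastAPI stub; the /translate path, field names, and model function are assumptions, not the platform's confirmed contract.

```python
# Hypothetical translation endpoint the leaderboard could call.
# The path and field names are illustrative assumptions; follow the
# schema in the leaderboard's documentation for real submissions.
# Run locally with: uvicorn app:app
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class TranslationRequest(BaseModel):
    text: str  # English source text

class TranslationResponse(BaseModel):
    translation: str  # Egyptian Arabic output

def my_model_translate(text: str) -> str:
    """Stub standing in for a real English -> Egyptian Arabic model."""
    return text  # replace with actual model inference

@app.post("/translate", response_model=TranslationResponse)
def translate(req: TranslationRequest) -> TranslationResponse:
    return TranslationResponse(translation=my_model_translate(req.text))
```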
Can I customize the evaluation criteria for my model?
Yes, the platform allows users to customize evaluation criteria to suit specific use cases or requirements.
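The customization interface is not described here, so purely as an illustration, a custom criteria specification might look something like the following; every key name is a hypothetical placeholder, not the platform's actual schema.

```python
# Hypothetical evaluation-criteria override; all key names are
# illustrative only, not the platform's real configuration schema.
custom_criteria = {
    "metrics": ["bleu", "meteor"],   # subset of supported metrics
    "bleu": {"tokenizer": "13a"},    # per-metric options
    "test_split": "dialectal",       # which benchmark slice to score on
}
```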
What if I encounter issues during submission or evaluation?
If you encounter any issues, contact the support team via the platform's help section or email [email protected].