Inception, a G42 company, in collaboration with the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), announced the launch of the AraGen Leaderboard, a framework designed to redefine the evaluation of Arabic Large Language Models (LLMs). Powered by the new, internally developed 3C3H metric, the framework delivers a transparent, robust, and holistic evaluation system that balances factual accuracy and usability, setting new standards for Arabic Natural Language Processing (NLP).
Serving over 400 million Arabic speakers worldwide, the AraGen Leaderboard addresses critical gaps in AI evaluation by offering a meticulously constructed evaluation dataset tailored to the unique linguistic and cultural intricacies of the Arabic language and region. The dynamic nature of this leaderboard tackles challenges such as benchmark leakage, reproducibility issues, and the absence of holistic metrics to evaluate both core knowledge and practical utility.
The introduction of generative tasks represents a significant advancement for Arabic LLMs, adding a new dimension to the evaluation process. Traditional leaderboards have relied primarily on static, likelihood- and accuracy-based benchmarks, which fail to capture real-world performance; the AraGen Leaderboard addresses these limitations. This underscores the transformative impact of the new benchmark in fostering AI innovation and enhancing model performance.
“The AraGen Leaderboard redefines Arabic LLM evaluation, setting a new standard for fairness, inclusivity, and innovation,” said Andrew Jackson, CEO of Inception. “By addressing the gaps in previous benchmarks and introducing generative tasks, the platform empowers researchers, developers, and organizations to create culturally aligned AI technologies. AraGen ensures transparency, reproducibility, and trust while advancing the global NLP landscape.”
The AraGen Leaderboard evaluates models across six dimensions: correctness, completeness, conciseness, helpfulness, honesty, and harmlessness. Featuring 279 questions across tasks like Arabic grammar, general Q&A, reasoning, and safety, it prioritizes the needs of Arabic speakers. Quarterly updates keep the leaderboard relevant while inviting public submissions to enhance model refinement and foster growth in the Arabic AI ecosystem.
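To make the six-dimension evaluation concrete, here is a minimal sketch of how the dimension scores might be aggregated into a single value. This is an illustrative assumption, not the published AraGen methodology: it assumes correctness and completeness are judged as binary pass/fail, the remaining four dimensions are rated on a 1–5 scale and normalized, and the final score is an unweighted mean.

```python
# Illustrative 3C3H-style aggregation (assumed scheme, not the official one).
# Assumptions: correctness/completeness are binary (0 or 1); conciseness,
# helpfulness, honesty, and harmlessness are 1-5 judge ratings normalized
# to [0, 1]; the overall score is the unweighted mean of all six.

def normalize_1_to_5(rating: float) -> float:
    """Map a 1-5 rating onto the [0, 1] interval."""
    return (rating - 1) / 4


def score_3c3h(correct: int, complete: int,
               concise: float, helpful: float,
               honest: float, harmless: float) -> float:
    """Average the six dimension scores into one value in [0, 1]."""
    scores = [
        float(correct),
        float(complete),
        normalize_1_to_5(concise),
        normalize_1_to_5(helpful),
        normalize_1_to_5(honest),
        normalize_1_to_5(harmless),
    ]
    return sum(scores) / len(scores)


# Example: a correct, complete answer with mid-to-high judge ratings.
print(round(score_3c3h(1, 1, 4, 5, 5, 4), 3))  # → 0.917
```

Under this sketch, a response that fails on factuality is penalized directly in the binary dimensions, while stylistic and safety qualities contribute on a graded scale, which mirrors the press release's stated balance between factual accuracy and usability.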
“AraGen is a major step towards open, collaborative, and reproducible evaluation of large language models for Arabic, with a focus on their text generation capabilities. This contrasts with popular leaderboards, which rely primarily on multiple-choice questions. Moreover, AraGen is a dynamic board with new questions every three months, which makes it much harder to game compared to existing leaderboards,” said Professor Preslav Nakov, Department Chair of Natural Language Processing and Professor of Natural Language Processing, Mohamed bin Zayed University of Artificial Intelligence (MBZUAI).
“Our goal was to create a benchmark that introduces generative task evaluation with a strong emphasis on transparency, reproducibility, and a rigorous measurement of models’ performances,” said Ali El Filali, Machine Learning Engineer at Inception and lead author of this work. “By evaluating models across multiple dimensions to assess both factuality and usability, the AraGen Leaderboard provides actionable insights for diverse NLP tasks. This empowers the Arabic AI community to develop safe and high-performing models for real-world needs that are important to our region. Moreover, AraGen sets a global example by demonstrating how AI benchmarks can prioritize equity and inclusion for underrepresented languages. It’s a step toward ensuring no language or culture is left behind in the AI revolution.”
The Leaderboard delivers detailed performance insights, enabling organizations to confidently select models that align with their requirements. By reducing the need for extensive internal testing, AraGen delivers cost-effectiveness through a metric better suited to LLM evaluation, while strengthening trust through its transparent and reproducible methodology.
For more information about the AraGen Leaderboard and submission guidelines, visit https://huggingface.co/blog/leaderboard-3c3h-aragen