View and submit LLM benchmark evaluations
Multilingual Text Embedding Model Pruner
Browse and evaluate ML tasks in MLIP Arena
Compare audio representation models using benchmark results
Submit models for evaluation and view leaderboard
Browse and filter machine learning models by category and modality
Explore and manage STM32 ML models with the STM32AI Model Zoo dashboard
Measure over-refusal in LLMs using OR-Bench
Calculate survival probability based on passenger details
Determine GPU requirements for large language models
Display benchmark results
View RL Benchmark Reports
Benchmark LLMs on accuracy and translation quality across languages
Aiera Finance Leaderboard is a benchmarking tool that tracks the performance of large language models (LLMs) in the financial domain. It lets users view and submit evaluations of various LLMs, supporting transparent, community-driven insight into AI performance in finance.
What is the purpose of Aiera Finance Leaderboard?
Aiera Finance Leaderboard is designed to help users understand and compare the performance of different LLMs in financial contexts, enabling better decision-making for AI adoption.
Can anyone submit evaluations to the leaderboard?
Yes, the platform allows users to submit their own evaluations, contributing to a community-driven benchmarking process.
How often are the rankings updated?
The rankings are updated in real time as new evaluations are submitted, so the leaderboard reflects the most current view of LLM performance.