Evaluate model accuracy using Fbeta score
Browse and filter ML model leaderboard data
Export Hugging Face models to ONNX
Persian Text Embedding Benchmark
Pergel: A Unified Benchmark for Evaluating Turkish LLMs
Find and download models from Hugging Face
SolidityBench Leaderboard
Calculate memory usage for LLM models
View and submit LLM benchmark evaluations
Display model benchmark results
View LLM Performance Leaderboard
Download a TriplaneGaussian model checkpoint
Upload a machine learning model to Hugging Face Hub
FBeta_Score is a model-benchmarking tool that evaluates classification models using the Fbeta score. The Fbeta score combines precision and recall into a single metric, weighted by a parameter beta, giving a balanced view of model performance. It is particularly useful when the classes in the data are imbalanced or when precision and recall are not equally important.
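The Space's own code is not shown here; as a minimal sketch, the same metric can be computed with scikit-learn's fbeta_score, using made-up labels and predictions:

```python
from sklearn.metrics import fbeta_score

# Made-up ground-truth labels and model predictions for a binary task
y_true = [0, 1, 1, 0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]

# beta=1.0 weights precision and recall equally (equivalent to the F1 score)
print(fbeta_score(y_true, y_pred, beta=1.0))
```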
1. What is the Fbeta score?
The Fbeta score is a metric that combines precision and recall, with a parameter beta that weights their importance. A beta value greater than 1 emphasizes recall, while a value less than 1 emphasizes precision.
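For reference, the score can also be computed directly from precision and recall. The sketch below assumes scikit-learn and the same illustrative labels as above:

```python
from sklearn.metrics import precision_score, recall_score

y_true = [0, 1, 1, 0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]

precision = precision_score(y_true, y_pred)  # 0.8 on this data
recall = recall_score(y_true, y_pred)        # 0.8 on this data
beta = 2.0

# Fbeta = (1 + beta^2) * P * R / (beta^2 * P + R)
fbeta = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
print(fbeta)
```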
2. When should I use a specific beta value?
Choose a beta value based on your problem's requirements. For example, if recall is more critical (e.g., detecting rare events), use beta > 1. If precision matters more (e.g., avoiding false positives), use beta < 1.
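A small illustration of that trade-off, again assuming scikit-learn and made-up predictions for a rare-event task:

```python
from sklearn.metrics import fbeta_score

# Made-up predictions for a rare-event detection task (positives are scarce)
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 0, 0, 0, 0, 0, 0, 1]

# beta=2 punishes the two missed positives (low recall) more heavily
print(fbeta_score(y_true, y_pred, beta=2.0))  # lower score
# beta=0.5 cares more about the single false positive (precision)
print(fbeta_score(y_true, y_pred, beta=0.5))  # higher score on the same predictions
```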
3. Does FBeta_Score support multi-class classification?
Yes, FBeta_Score can handle multi-class classification problems by computing a score for each class or aggregating the per-class scores into a single overall score.
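The FAQ does not spell out the tool's multi-class interface; assuming a scikit-learn-style API, per-class and averaged scores look like this:

```python
from sklearn.metrics import fbeta_score

# Hypothetical three-class labels and predictions
y_true = [0, 1, 2, 2, 1, 0, 2, 1]
y_pred = [0, 2, 2, 2, 1, 0, 1, 1]

# One score per class
print(fbeta_score(y_true, y_pred, beta=1.0, average=None))
# A single macro-averaged score across all classes
print(fbeta_score(y_true, y_pred, beta=1.0, average="macro"))
```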