Convert PaddleOCR models to ONNX format
PaddleOCRModelConverter is a tool for converting PaddleOCR models into the ONNX format. ONNX (Open Neural Network Exchange) is an open standard for representing machine learning models, allowing them to be transferred between frameworks and platforms for better interoperability and performance optimization. The tool is particularly useful for deploying PaddleOCR models in environments that support ONNX, such as TensorRT, Core ML, or edge inference engines.
• Model Conversion: Converts PaddleOCR models to ONNX format for cross-platform compatibility.
• Optimized Inference: Can optimize converted models for faster, more efficient inference at deployment time.
• Framework Compatibility: Facilitates deployment across multiple ML frameworks and platforms.
• Command-Line Interface: Provides an easy-to-use command-line tool for model conversion.
• Cross-Platform Support: Enables deployment on diverse operating systems and hardware configurations.
paddle_ocr_model_converter --input_model path/to/model --output_path path/to/output
What models are supported by PaddleOCRModelConverter?
PaddleOCRModelConverter supports all standard PaddleOCR models, including but not limited to CRNN, Transformer, and PFAN models.
Why should I convert my PaddleOCR model to ONNX?
Converting to ONNX enables deployment in ONNX-compatible frameworks and platforms, which can improve inference performance and provide better interoperability.
Are there any specific dependencies required for the conversion?
Yes. Make sure recent versions of PaddlePaddle and ONNX Runtime are installed in your environment for smooth conversion and inference.
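A quick way to confirm those dependencies are present is a small import check. The helper below is a sketch, not part of the converter itself; the package names are the usual ones (the `paddlepaddle` distribution is imported as `paddle`) but may differ in your environment:

```python
import importlib


def installed(packages):
    """Return a {package_name: importable?} mapping for each given package."""
    status = {}
    for pkg in packages:
        try:
            importlib.import_module(pkg)
            status[pkg] = True
        except ImportError:
            status[pkg] = False
    return status


# Typical import names for the conversion/inference stack; adjust as needed.
print(installed(("paddle", "onnx", "onnxruntime")))
```

Any `False` entry in the result points at a package to install before attempting a conversion.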