Convert Hugging Face models to OpenVINO format
OpenVINO Export is a tool designed to convert models from the Hugging Face ecosystem into the OpenVINO format. OpenVINO (Open Visual Inference and Neural Network Optimization) is an open-source toolkit developed by Intel for optimizing and deploying AI inference. By exporting models to OpenVINO format, users can leverage OpenVINO's optimizations for improved performance on Intel hardware.
• Model Conversion: Converts Hugging Face models to OpenVINO format for compatibility with OpenVINO inference engines.
• Hardware Optimization: Enables optimized inference on Intel CPUs, GPUs, and other accelerators.
• Model Compatibility: Supports a wide range of Hugging Face models, including popular architectures like BERT, ResNet, and more.
• Performance Enhancements: Takes advantage of OpenVINO's graph optimizations for faster and more efficient inference.
# Example code snippet: exporting with Optimum Intel (the `optimum[openvino]` package)
from optimum.intel import OVModelForFeatureExtraction

# export=True converts the Hugging Face checkpoint to OpenVINO IR on load
model = OVModelForFeatureExtraction.from_pretrained("bert-base-uncased", export=True)
model.save_pretrained("bert-base-uncased-openvino")
What models are supported by OpenVINO Export?
OpenVINO Export supports a wide range of models from the Hugging Face ecosystem, including transformer-based models, convolutional neural networks, and more. However, compatibility depends on the model architecture and its support in OpenVINO.
Will converting my model to OpenVINO improve performance?
Yes, OpenVINO optimizations can significantly improve inference performance on Intel hardware. The exact performance gain depends on the model, hardware, and optimization settings.
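One rough way to quantify the gain is to time the same inference call before and after conversion. Below is a minimal timing sketch using only the standard library; the commented-out callables at the bottom are placeholders, not part of any OpenVINO API.

```python
import time

def average_latency(infer, n_runs=50, warmup=5):
    """Time an inference callable and return its mean latency in milliseconds."""
    for _ in range(warmup):  # warm-up runs are excluded from timing
        infer()
    start = time.perf_counter()
    for _ in range(n_runs):
        infer()
    return (time.perf_counter() - start) / n_runs * 1000.0

# Plug in your own callables, e.g. wrappers around PyTorch and OpenVINO inference:
# pytorch_ms = average_latency(lambda: pt_model(**inputs))
# openvino_ms = average_latency(lambda: ov_model(**inputs))
```

Averaging over many runs, with warm-up iterations discarded, keeps one-time costs such as model compilation out of the measurement.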
How do I troubleshoot issues during model conversion?
Check the OpenVINO Export logs for error messages, ensure the model architecture is supported, and verify that your OpenVINO installation is up to date. You can also refer to the official OpenVINO documentation and community forums for additional guidance.
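Scanning the logs can be partially automated: a small helper that pulls error-bearing lines out of a conversion log makes the relevant messages easier to spot. A sketch using only the standard library; the keyword list and the sample log are illustrative assumptions, not an official set of OpenVINO error markers.

```python
def extract_export_errors(log_text, keywords=("error", "unsupported", "failed")):
    """Return log lines that look like conversion problems (case-insensitive match)."""
    return [
        line.strip()
        for line in log_text.splitlines()
        if any(kw in line.lower() for kw in keywords)
    ]

# Hypothetical log excerpt for illustration:
sample_log = """\
INFO: loading model bert-base-uncased
ERROR: unsupported operator in graph
INFO: export aborted, conversion failed
"""
print(extract_export_errors(sample_log))
# → ['ERROR: unsupported operator in graph', 'INFO: export aborted, conversion failed']
```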