Convert Hugging Face models to OpenVINO format
Create and manage ML pipelines with ZenML Dashboard
Evaluate LLM over-refusal rates with OR-Bench
Determine GPU requirements for large language models
Browse and filter machine learning models by category and modality
Optimize and train foundation models using IBM's FMS
Generate a leaderboard comparing DNA models
Display a leaderboard of language model evaluations
Convert and upload model files for Stable Diffusion
Quantize a model for faster inference
Explore and manage STM32 ML models with the STM32AI Model Zoo dashboard
Display the SolidityBench leaderboard
Benchmark models using PyTorch and OpenVINO
OpenVINO Export is a tool designed to convert models from the Hugging Face ecosystem into the OpenVINO format. OpenVINO (Open Visual Inference and Neural Network Optimization) is an open-source toolkit developed by Intel for optimizing and deploying AI inference. By exporting models to OpenVINO format, users can leverage OpenVINO's optimizations for improved performance on Intel hardware.
• Model Conversion: Converts Hugging Face models to OpenVINO format for compatibility with OpenVINO inference engines.
• Hardware Optimization: Enables optimized inference on Intel CPUs, GPUs, and other accelerators.
• Model Compatibility: Supports a wide range of Hugging Face models, including popular architectures like BERT, ResNet, and more.
• Performance Enhancements: Takes advantage of OpenVINO's graph optimizations for faster and more efficient inference.
# Example: convert a Hugging Face model to OpenVINO format.
# This uses the optimum-intel library (pip install optimum[openvino]),
# the standard Hugging Face -> OpenVINO export path.
from optimum.intel import OVModelForFeatureExtraction

# export=True downloads the PyTorch weights and converts them to OpenVINO IR
model = OVModelForFeatureExtraction.from_pretrained("bert-base-uncased", export=True)
model.save_pretrained("bert-base-uncased-openvino")  # writes openvino_model.xml/.bin
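The same conversion can also be run from the command line with the optimum-cli tool that ships with optimum-intel; the output directory here mirrors the Python example above:

optimum-cli export openvino --model bert-base-uncased bert-base-uncased-openvino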
What models are supported by OpenVINO Export?
OpenVINO Export supports a wide range of models from the Hugging Face ecosystem, including transformer-based models, convolutional neural networks, and more. However, compatibility depends on the model architecture and its support in OpenVINO.
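For example, the same export pattern extends to other task classes in optimum-intel; the sketch below assumes the microsoft/resnet-50 image-classification checkpoint:

# Sketch: exporting a convolutional model with the matching task class
from optimum.intel import OVModelForImageClassification

model = OVModelForImageClassification.from_pretrained("microsoft/resnet-50", export=True)
model.save_pretrained("resnet-50-openvino")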
Will converting my model to OpenVINO improve performance?
Yes, OpenVINO optimizations can significantly improve inference performance on Intel hardware. The exact performance gain depends on the model, hardware, and optimization settings.
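A rough way to verify the gain on your own hardware is to time the exported model directly; a minimal latency sketch (paths follow the export example above, and absolute numbers will vary by machine):

# Time repeated forward passes of the exported OpenVINO model
import time
from transformers import AutoTokenizer
from optimum.intel import OVModelForFeatureExtraction

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = OVModelForFeatureExtraction.from_pretrained("bert-base-uncased-openvino")
inputs = tokenizer("OpenVINO latency test sentence", return_tensors="pt")

start = time.perf_counter()
for _ in range(100):  # average over repeated runs to smooth out noise
    model(**inputs)
print(f"mean latency: {(time.perf_counter() - start) / 100:.4f} s")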
How do I troubleshoot issues during model conversion?
Check the OpenVINO Export logs for error messages, ensure the model is supported, and verify that your OpenVINO installation is up-to-date. You can also refer to the official OpenVINO documentation and community forums for additional guidance.