Convert Hugging Face models to OpenVINO format
OpenVINO Export is a tool designed to convert models from the Hugging Face ecosystem into the OpenVINO format. OpenVINO (Open Visual Inference and Neural Network Optimization) is an open-source toolkit developed by Intel for optimizing and deploying AI inference. By exporting models to OpenVINO format, users can leverage OpenVINO's optimizations for improved performance on Intel hardware.
• Model Conversion: Converts Hugging Face models to OpenVINO format for compatibility with OpenVINO inference engines.
• Hardware Optimization: Enables optimized inference on Intel CPUs, GPUs, and other accelerators.
• Model Compatibility: Supports a wide range of Hugging Face models, including popular architectures like BERT, ResNet, and more.
• Performance Enhancements: Takes advantage of OpenVINO's graph optimizations for faster and more efficient inference.
# Example: exporting a model with the Optimum Intel library
# (pip install optimum[openvino]); the original snippet referenced a
# nonexistent openvino.export module.
from optimum.intel import OVModelForFeatureExtraction

# export=True converts the PyTorch checkpoint to OpenVINO IR while loading
model = OVModelForFeatureExtraction.from_pretrained("bert-base-uncased", export=True)
model.save_pretrained("bert-base-uncased-openvino")
What models are supported by OpenVINO Export?
OpenVINO Export supports a wide range of models from the Hugging Face ecosystem, including transformer-based models, convolutional neural networks, and more. However, compatibility depends on the model architecture and its support in OpenVINO.
Will converting my model to OpenVINO improve performance?
Yes, OpenVINO optimizations can significantly improve inference performance on Intel hardware. The exact performance gain depends on the model, hardware, and optimization settings.
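The most reliable way to confirm a gain is to time both backends on identical inputs on your own hardware. Below is a minimal, generic timing sketch; `run_baseline` is a hypothetical placeholder for your actual PyTorch or OpenVINO inference call, and the helper name `benchmark` is ours, not part of any library:

```python
import time

def benchmark(fn, inputs, warmup=3, iters=20):
    """Return the average latency of fn(inputs) in milliseconds."""
    for _ in range(warmup):            # warm-up runs to stabilize caches
        fn(inputs)
    start = time.perf_counter()
    for _ in range(iters):
        fn(inputs)
    return (time.perf_counter() - start) / iters * 1000.0

# Placeholder workload -- swap in your real model's forward pass,
# once with the original backend and once with the OpenVINO export.
def run_baseline(x):
    return sum(i * i for i in x)

latency_ms = benchmark(run_baseline, list(range(1000)))
print(f"average latency: {latency_ms:.3f} ms")
```

Run the same harness against the original model and the exported one, and compare the two averages rather than single runs.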
How do I troubleshoot issues during model conversion?
Check the OpenVINO Export logs for error messages, ensure the model is supported, and verify that your OpenVINO installation is up-to-date. You can also refer to the official OpenVINO documentation and community forums for additional guidance.
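Version mismatches between the toolkit and its export dependencies are a common source of conversion failures, so it helps to include installed versions in any bug report. The sketch below is a small hypothetical helper (the package list is an assumption about typical dependencies, not an official requirement set):

```python
from importlib.metadata import version, PackageNotFoundError

def report_versions(packages=("openvino", "optimum", "transformers")):
    """Map each package name to its installed version, or a marker if absent."""
    report = {}
    for pkg in packages:
        try:
            report[pkg] = version(pkg)
        except PackageNotFoundError:
            report[pkg] = "not installed"   # missing dependency is itself a clue
    return report

print(report_versions())
```

Attaching this output alongside the export logs makes it much easier for maintainers or forum members to reproduce the issue.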