Optimum CLI Commands. Compress, Quantize and Convert!
The Optimum-CLI-Tool is a command-line interface for optimizing machine learning models through compression, quantization, and conversion. It streamlines the process of preparing models for deployment, with a focus on efficiency and performance, and is particularly useful for text generation workloads.
optimum-cli export openvino --model your_model_id optimized_model/
What is model quantization?
Quantization reduces the numerical precision of model weights (for example, from 32-bit floating point to 8-bit integers), which decreases model size and improves inference speed, usually with only a small loss in accuracy.
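As a concrete sketch, and assuming the OpenVINO backend of Optimum (optimum-intel) is installed, 8-bit weight quantization can be requested directly at export time; the model name below is only a placeholder:
optimum-cli export openvino --model your_model_id --weight-format int8 your_model_int8/
Depending on the installed version, the --weight-format flag may also accept values such as int4 and fp16.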
Which frameworks does Optimum-CLI-Tool support?
The tool works with PyTorch and TensorFlow checkpoints and can export them to optimized formats such as ONNX and OpenVINO IR.
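For instance, assuming the ONNX exporter dependencies of Optimum are installed, a Hub checkpoint (the model name here is purely illustrative) can be exported to ONNX with:
optimum-cli export onnx --model distilbert-base-uncased distilbert_onnx/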
How do I convert a model to OpenVINO format?
Use the export command with the openvino target, specifying the model and an output directory. For example:
optimum-cli export openvino --model your_model_id openvino_model/
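Assuming optimum-intel is installed, the output directory will contain the OpenVINO IR files (openvino_model.xml and openvino_model.bin) along with the tokenizer and configuration files; for text generation models it can typically be loaded back with the OVModelForCausalLM class from optimum.intel.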