Optimum CLI Commands. Compress, Quantize and Convert!
The Optimum-CLI-Tool is a command-line interface for optimizing machine learning models through compression, quantization, and conversion. It streamlines the process of preparing models for deployment, with a focus on efficiency and inference performance. The tool is particularly useful for text-generation workloads and aims to simplify model optimization pipelines.
optimum-cli convert --input-model your_model.pb --output-model optimized_model.xml
What is model quantization?
Quantization reduces the numerical precision of model weights, decreasing model size and improving inference speed without significant loss in accuracy.
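To make the idea concrete, here is a minimal sketch of symmetric int8 weight quantization in plain NumPy. This is a toy illustration of the principle, not how Optimum or OpenVINO implement it internally; the function names are made up for this example.

```python
import numpy as np

def quantize_int8(weights):
    # Illustrative only: symmetric per-tensor quantization.
    # Map float32 weights onto the int8 range [-127, 127]
    # using a single scale factor derived from the largest magnitude.
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximation of the original weights.
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.003, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
```

Each weight is now stored in 1 byte instead of 4, and the rounding error per weight is bounded by half the scale factor, which is why accuracy loss is typically small.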
Which frameworks does Optimum-CLI-Tool support?
The tool supports TensorFlow, PyTorch, and OpenVINO, allowing seamless conversion between these formats.
How do I convert a model to OpenVINO format?
Run the tool with the conversion option, specifying the input model and desired output format. For example:
optimum-cli convert --input-model your_model.pb --output-model optimized_model.xml --target-framework openvino