quantize
Quantize a trained model to reduce its memory footprint.
Additional Documentation
Model Quantization Guide: https://github.com/ReRAM-Labs/yzlite/docs/guides/model_quantization
Usage
Usage: yzlite quantize [OPTIONS] <model>
Quantize a model into a .tflite file
The model is automatically quantized after training completes, so this
command is typically only needed if the yzlite_model.tflite_converter
parameters are modified after the model is trained.
For more details see:
https://github.com/ReRAM-Labs/yzlite/docs/guides/model_quantization
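To make that scenario concrete, the sketch below shows how quantization settings might be declared in a YZLITE model script. The yzlite_model.tflite_converter attribute comes from this page; the import, the YZLiteModel class, and the dictionary keys are assumptions modeled on standard TensorFlow Lite converter options, not confirmed YZLITE API:

# my_model.py -- hypothetical YZLITE model script (illustrative only)
import yzlite  # assumed package/import name

my_model = yzlite.YZLiteModel()  # assumed model class

# Quantization settings consumed by `yzlite quantize`.
# The keys below mirror standard TensorFlow Lite converter options;
# the exact YZLITE key names are an assumption.
my_model.tflite_converter['optimizations'] = ['DEFAULT']
my_model.tflite_converter['inference_input_type'] = 'int8'
my_model.tflite_converter['inference_output_type'] = 'int8'

# Editing these values after training and re-running
# `yzlite quantize my_model` regenerates the .tflite without retraining.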
----------
Examples
----------
# Quantize the previously trained model
# and update its associated model archive audio_example1.yzlite.zip
# with the generated .tflite model file
yzlite quantize audio_example1
# Generate a .tflite in the current directory from the given model archive
yzlite quantize audio_example1.yzlite.zip --output .
# Generate a .tflite from the given model python script
# The .tflite is generated in the same directory as the Python script
yzlite quantize my_model.py --build
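Once a .tflite has been generated, its quantized tensor layout can be inspected with the standard TensorFlow Lite Python API. A minimal sketch, assuming TensorFlow is installed; the model path is illustrative:

# Inspect the quantized model's inputs/outputs with plain TensorFlow Lite
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='my_model.tflite')  # illustrative path
interpreter.allocate_tensors()

# Each entry reports the tensor name, shape, dtype (e.g. int8 after
# full-integer quantization), and quantization (scale, zero_point)
for detail in interpreter.get_input_details():
    print('input :', detail['name'], detail['shape'], detail['dtype'], detail['quantization'])
for detail in interpreter.get_output_details():
    print('output:', detail['name'], detail['shape'], detail['dtype'], detail['quantization'])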
Arguments
* model <model> One of the following: [default: None] [required]
- Name of YZLITE model
- Path to trained model's archive (.yzlite.zip)
- Path to YZLITE model's python script
Options
--verbose -v Enable verbose console logs
--output -o <path> One of the following: [default: None]
- Path to generated output .tflite file
- Directory where output .tflite is generated
- If omitted, .tflite is generated in the YZLITE model's log directory and the model archive is updated
--build -b Build the Keras model rather than loading it from a pre-trained .h5 file in the YZLITE model's archive.
This is useful if a .tflite needs to be generated only to view its structure.
--weights -w <value> Optionally load weights from a previous training session. [default: None]
May be one of the following:
- If omitted, quantize using the output .h5 from training
- Absolute path to a weights .h5 file generated by Keras during training
- The keyword `best`; find the best weights in <model log dir>/train/weights
- Filename of a .h5 file in <model log dir>/train/weights
--update-archive --no-update-archive Update the model archive with the quantized model [default: no-update-archive]
--help Show this message and exit.
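The options combine as expected. For example, to re-quantize using the best weights from training and write the result back into the model archive (all flags as documented above; audio_example1 is the model from the earlier examples):

# Quantize with the best training weights, update the model archive,
# and print verbose logs
yzlite quantize audio_example1 --weights best --update-archive --verbose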