
Model Compression

Quantization: reducing the numerical precision of weights and activations, e.g. from Float64 down to Int8
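As a minimal sketch of the idea, the affine quantization scheme below maps a tensor's floating-point range onto the Int8 range and back. The function names and the choice of asymmetric (min/max) calibration are illustrative assumptions, not a specific library's API:

```python
import numpy as np

def quantize(x, num_bits=8):
    # Affine (asymmetric) quantization: map [x.min(), x.max()] onto the
    # signed integer range [qmin, qmax].
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Recover an approximation of the original floats.
    return (q.astype(np.float64) - zero_point) * scale

np.random.seed(0)
weights = np.random.randn(4, 4)          # Float64 weights
q, scale, zp = quantize(weights)          # Int8 representation
recovered = dequantize(q, scale, zp)      # approximate Float64 weights
```

The Int8 tensor takes 8x less memory than the Float64 original, at the cost of a reconstruction error bounded by roughly one quantization step (`scale`).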
Pruning: removing unnecessary parts of the model, e.g. removing low-importance neurons or connections from a neural network
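A common, simple criterion is magnitude pruning: zero out the weights with the smallest absolute value. The sketch below (the function name and 50% sparsity target are illustrative assumptions) prunes a weight matrix to a given sparsity:

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    # Unstructured magnitude pruning: zero out the `sparsity` fraction
    # of weights with the smallest absolute value.
    k = int(w.size * sparsity)
    threshold = np.sort(np.abs(w), axis=None)[k]
    mask = np.abs(w) >= threshold          # keep only large-magnitude weights
    return w * mask, mask

np.random.seed(0)
w = np.random.randn(8, 8)
pruned, mask = magnitude_prune(w, sparsity=0.5)
```

The resulting sparse matrix can be stored and multiplied more cheaply; in practice pruning is usually followed by fine-tuning to recover any lost accuracy.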
Last updated: 2024-01-24 • Contributors: AhmedThahir
