Model Compression

Quantization: reducing the numerical precision of weights, e.g. from Float64 down to Int8
Pruning: removing unnecessary parts of the model, e.g. removing redundant neurons from an ANN
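
The two techniques above can be sketched in a few lines of NumPy. This is a minimal illustration, not a production implementation: the affine int8 quantization scheme and the magnitude-based (unstructured) pruning criterion shown here are common choices, but the function names and the 50% sparsity default are assumptions for this example.

```python
import numpy as np

def quantize_int8(w):
    """Affine quantization: map float weights onto the int8 range [-128, 127]."""
    scale = (w.max() - w.min()) / 255.0          # float step per int8 level
    zero_point = np.round(-w.min() / scale) - 128  # int8 value representing 0.0
    q = np.clip(np.round(w / scale + zero_point), -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the int8 representation."""
    return (q.astype(np.float64) - zero_point) * scale

def magnitude_prune(w, sparsity=0.5):
    """Unstructured pruning: zero out the smallest-magnitude weights."""
    k = int(sparsity * w.size)
    threshold = np.sort(np.abs(w), axis=None)[k]
    return np.where(np.abs(w) < threshold, 0.0, w)

w = np.linspace(-1.0, 1.0, 11)       # toy weight vector
q, scale, zp = quantize_int8(w)      # 8x smaller storage than Float64
w_hat = dequantize(q, scale, zp)     # close to w, within one quantization step
w_pruned = magnitude_prune(w, 0.5)   # half the weights set to zero
```

Note the trade-off both functions expose: quantization introduces a rounding error bounded by `scale`, and pruning discards exactly the weights whose magnitudes fall below the sparsity threshold.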
Last updated: 2024-05-12 • Contributors: AhmedThahir