
Compression strategies and space-conscious representations for deep neural networks / G.C. Marino, G. Ghidoli, M. Frasca, D. Malchiodi (INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION). In: 2020 25th International Conference on Pattern Recognition (ICPR). [S.l.]: IEEE, 2020. ISBN 978-1-7281-8808-9, pp. 9835-9842. (Paper presented at the 25th International Conference on Pattern Recognition, ICPR 2020, held in Milan in 2021.) [10.1109/ICPR48806.2021.9412209]

Compression strategies and space-conscious representations for deep neural networks

M. Frasca; D. Malchiodi
2020

Abstract

Recent advances in deep learning have made available large, powerful convolutional neural networks (CNNs) with state-of-the-art performance in several real-world applications. Unfortunately, these large models have millions of parameters and are therefore not deployable on resource-limited platforms (e.g., where RAM is limited). Compressing CNNs thus becomes a critical problem in order to obtain memory-efficient, and possibly computationally faster, model representations. In this paper, we investigate the impact of lossy compression of CNNs through weight pruning and quantization, and of lossless weight matrix representations based on source coding. We tested several combinations of these techniques on four benchmark datasets for classification and regression problems, achieving compression rates of up to 165×, while preserving or improving the model performance.
CNN compression; Drug-target prediction; Entropy coding; Probabilistic quantization; Weight pruning
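
The paper's specific pruning and quantization procedures are not reproduced in this record; as a rough illustration of the two lossy steps named in the abstract and keywords, the following NumPy sketch applies magnitude-based pruning followed by uniform quantization to a weight matrix. The function names, the magnitude criterion, and the evenly spaced quantization levels are illustrative assumptions, not the authors' method.

```python
import numpy as np

def prune_by_magnitude(w: np.ndarray, rate: float) -> np.ndarray:
    """Zero out the fraction `rate` of weights with the smallest magnitude.
    (Illustrative pruning criterion, not necessarily the paper's.)"""
    threshold = np.quantile(np.abs(w), rate)
    return np.where(np.abs(w) < threshold, 0.0, w)

def uniform_quantize(w: np.ndarray, n_levels: int) -> np.ndarray:
    """Snap each surviving weight to the nearest of `n_levels` evenly
    spaced values. (A simple stand-in for the paper's quantization.)"""
    nonzero = w[w != 0]
    if nonzero.size == 0:
        return w
    levels = np.linspace(nonzero.min(), nonzero.max(), n_levels)
    nearest = levels[np.argmin(np.abs(w[..., None] - levels), axis=-1)]
    return np.where(w == 0, 0.0, nearest)  # keep pruned weights at exactly zero

# Toy usage on a random "weight matrix": 90% sparsity, 32 shared values.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
w_small = uniform_quantize(prune_by_magnitude(w, rate=0.9), n_levels=32)
```

After quantization the matrix contains only a handful of distinct values, which is what makes a subsequent lossless source coder (e.g., Huffman coding over the value indices, per the "Entropy coding" keyword) effective at shrinking the stored representation further.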
Academic field: Settore INF/01 - Informatica (Computer Science)
   Funding project: Multi-criteria optimized data structures: from compressed indexes to learned indexes, and beyond
   Funder: MINISTERO DELL'ISTRUZIONE E DEL MERITO
   Grant: 2017WR7SHH_004
Book Part (author)
Files in this item:
File: ICPR-published.pdf
Access: restricted
Type: Publisher's version/PDF
Size: 337.88 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/876038
Citations
  • PMC: ND
  • Scopus: 6
  • Web of Science: 4