Compressing the Activation Maps in Deep Neural Networks and Its Regularizing Effect
Umeå University, Faculty of Medicine, Department of Radiation Sciences. ORCID iD: 0000-0002-2391-1419
Umeå University, Faculty of Medicine, Department of Radiation Sciences.
Umeå University, Faculty of Medicine, Department of Radiation Sciences.
Umeå University, Faculty of Science and Technology, Department of Computing Science.
2024 (English). In: Transactions on Machine Learning Research, E-ISSN 2835-8856. Article in journal (Refereed). Published.
Abstract [en]

Deep learning has dramatically improved performance in various image analysis applications in the last few years. However, recent deep learning architectures can be very large, with up to hundreds of layers and millions or even billions of model parameters that are impossible to fit into commodity graphics processing units. We propose a novel approach for compressing high-dimensional activation maps, the most memory-consuming part of training modern deep learning architectures. The proposed method can be used to compress the feature maps of a single layer, multiple layers, or the entire network, according to specific needs. To this end, we evaluated three different methods for compressing the activation maps: the Wavelet Transform, the Discrete Cosine Transform, and Simple Thresholding. We performed experiments on two classification tasks for natural images and two semantic segmentation tasks for medical images. Using the proposed method, we reduced the memory usage for activation maps by up to 95%. Additionally, we show that the proposed method induces a regularization effect that acts on the layer weight gradients. Code is available at https://github.com/vuhoangminh/Compressing-the-Activation-Maps-in-DNNs.
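
The compression idea lends itself to a short illustration. Below is a minimal, hypothetical PyTorch sketch of the simplest of the three evaluated schemes, Simple Thresholding: a custom autograd function stores a thresholded, sparsified copy of the activation map for the backward pass instead of the full dense tensor. The class name ThresholdedReLU and the threshold parameter tau are illustrative assumptions, not the authors' implementation (which is available at the linked GitHub repository).

    import torch

    class ThresholdedReLU(torch.autograd.Function):
        """Hypothetical sketch: a ReLU whose saved activation map is
        compressed by simple thresholding before being stored for the
        backward pass. Entries below tau are zeroed, so the saved
        tensor can be kept in a sparse format to reduce memory."""

        @staticmethod
        def forward(ctx, x, tau=1e-2):
            y = torch.relu(x)
            # Compress: drop small entries and keep a sparse copy for backward.
            mask = y.abs() >= tau
            ctx.save_for_backward((y * mask).to_sparse())
            return y

        @staticmethod
        def backward(ctx, grad_out):
            (y_sparse,) = ctx.saved_tensors
            y = y_sparse.to_dense()  # decompress on demand
            # The gradient is computed from the lossy, decompressed map.
            grad_in = grad_out * (y > 0).to(grad_out.dtype)
            return grad_in, None  # no gradient for tau

    if __name__ == "__main__":
        x = torch.randn(4, 64, 32, 32, requires_grad=True)
        y = ThresholdedReLU.apply(x, 1e-1)
        y.sum().backward()
        print(x.grad.shape)

In a full layer, the decompressed activations would also feed the weight-gradient computation; the perturbation introduced by the lossy reconstruction is the mechanism behind the regularization effect on layer weight gradients that the paper reports.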

Place, publisher, year, edition, pages
2024.
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:umu:diva-236503
Scopus ID: 2-s2.0-85219517351
OAI: oai:DiVA.org:umu-236503
DiVA, id: diva2:1945240
Available from: 2025-03-18. Created: 2025-03-18. Last updated: 2025-03-18. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Scopus
Paper at OpenReview.net

Authority records

Vu, Minh Hoang

Search in DiVA

By author/editor
Vu, Minh Hoang; Garpebring, Anders; Nyholm, Tufve; Löfstedt, Tommy
By organisation
Department of Radiation Sciences; Department of Computing Science
In the same journal
Transactions on Machine Learning Research
