Compressing the Activation Maps in Deep Neural Networks and Its Regularizing Effect
Umeå University, Faculty of Medicine, Department of Radiation Sciences. ORCID iD: 0000-0002-2391-1419
Umeå University, Faculty of Medicine, Department of Radiation Sciences.
Umeå University, Faculty of Medicine, Department of Radiation Sciences.
Umeå University, Faculty of Science and Technology, Department of Computing Science.
2024 (English) In: Transactions on Machine Learning Research, E-ISSN 2835-8856. Article in journal (Refereed) Published
Abstract [en]

Deep learning has dramatically improved performance in various image analysis applications in the last few years. However, recent deep learning architectures can be very large, with up to hundreds of layers and millions or even billions of model parameters that are impossible to fit into commodity graphics processing units. We propose a novel approach for compressing high-dimensional activation maps, the most memory-consuming part when training modern deep learning architectures. The proposed method can be used to compress the feature maps of a single layer, multiple layers, or the entire network according to specific needs. To this end, we also evaluated three different methods to compress the activation maps: Wavelet Transform, Discrete Cosine Transform, and Simple Thresholding. We performed experiments in two classification tasks for natural images and two semantic segmentation tasks for medical images. Using the proposed method, we could reduce the memory usage for activation maps by up to 95%. Additionally, we show that the proposed method induces a regularization effect that acts on the layer weight gradients. Code is available at https://github.com/vuhoangminh/Compressing-the-Activation-Maps-in-DNNs.
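The pipeline the abstract describes (transform an activation map, keep only the largest coefficients, reconstruct on demand) can be sketched with NumPy/SciPy for the Discrete Cosine Transform variant. This is an illustrative sketch, not the authors' implementation; the function names and the `keep_ratio` parameter are assumptions for the example:

```python
import numpy as np
from scipy.fft import dctn, idctn


def compress_activation(x, keep_ratio=0.25):
    """Transform an activation map with an orthonormal DCT and zero out
    all but the largest-magnitude coefficients (illustrative sketch)."""
    coeffs = dctn(x, norm="ortho")
    k = max(1, int(keep_ratio * coeffs.size))
    # Threshold at the magnitude of the k-th largest coefficient.
    thresh = np.partition(np.abs(coeffs).ravel(), -k)[-k]
    return np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)


def decompress_activation(sparse_coeffs):
    """Invert the DCT to recover an approximation of the activation map."""
    return idctn(sparse_coeffs, norm="ortho")


# Toy activation map: batch=1, channels=2, 8x8 spatial resolution.
rng = np.random.default_rng(0)
act = rng.standard_normal((1, 2, 8, 8))

sparse = compress_activation(act, keep_ratio=0.25)
recon = decompress_activation(sparse)

kept_fraction = np.count_nonzero(sparse) / sparse.size
rel_error = np.linalg.norm(recon - act) / np.linalg.norm(act)
```

In a training loop, only the sparse coefficients would be stored during the forward pass, with the map reconstructed when the backward pass needs it; the paper's memory savings would then come from storing the sparse representation instead of the dense map.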

Place, publisher, year, edition, pages
2024.
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:umu:diva-236503
Scopus ID: 2-s2.0-85219517351
OAI: oai:DiVA.org:umu-236503
DiVA, id: diva2:1945240
Available from: 2025-03-18 Created: 2025-03-18 Last updated: 2025-03-18 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Scopus
Paper at OpenReview.net

Person

Vu, Minh Hoang

Search in DiVA

By author/editor
Vu, Minh Hoang; Garpebring, Anders; Nyholm, Tufve; Löfstedt, Tommy
By organisation
Department of Radiation Sciences; Department of Computing Science
In the same journal
Transactions on Machine Learning Research
Computer Sciences

Search outside of DiVA

Google
Google Scholar
