Accelerating Deep Learning Inference in Constrained Embedded Devices Using Hardware Loops and a Dot Product Unit
2020 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 8, p. 165913-165926. Article in journal (Refereed). Published.
Abstract [en]

Deep learning algorithms have seen success in a wide variety of applications, such as machine translation, image and speech recognition, and self-driving cars. However, these algorithms have only recently gained a foothold in the embedded systems domain. Most embedded systems are based on cheap microcontrollers with limited memory capacity, and, thus, are typically seen as not capable of running deep learning algorithms. Nevertheless, we consider that advancements in compression of neural networks and neural network architecture, coupled with an optimized instruction set architecture, could make microcontroller-grade processors suitable for specific low-intensity deep learning applications. We propose a simple instruction set extension with two main components: hardware loops and dot product instructions. To evaluate the effectiveness of the extension, we developed optimized assembly functions for the fully connected and convolutional neural network layers. When using the extensions and the optimized assembly functions, we achieve an average clock cycle count decrease of 73% for a small-scale convolutional neural network. On a per-layer basis, our optimizations decrease the clock cycle count for fully connected layers and convolutional layers by 72% and 78%, respectively. The average energy consumption per inference decreases by 73%. We have shown that adding just hardware loops and dot product instructions has a significant positive effect on processor efficiency in computing neural network functions.
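
The computation the extension targets is the multiply-accumulate loop at the core of both fully connected and convolutional layers. The paper's actual RISC-V assembly is not reproduced here; the following C sketch only illustrates that inner loop, with comments marking the work a hardware loop (loop counter and branch handled in hardware) and a dot product instruction (several multiply-accumulates per instruction) would absorb. All names and the int8/int32 data types in the sketch are assumptions, not the authors' implementation.

```c
#include <stdint.h>
#include <stddef.h>

/*
 * Illustrative fully connected layer with 8-bit weights/activations and
 * 32-bit accumulation, as is common on microcontroller-grade cores.
 * In a plain scalar baseline, each inner-loop iteration costs loads, a
 * multiply, an add, an index update, and a compare-and-branch. A hardware
 * loop removes the index-update and branch overhead, and a dot product
 * instruction replaces several multiply-accumulate iterations with a
 * single instruction, which is where a cycle-count reduction of the kind
 * reported in the abstract would come from.
 */
static void fc_layer(const int8_t *weights, const int8_t *input,
                     const int32_t *bias, int8_t *output,
                     size_t in_dim, size_t out_dim)
{
    for (size_t o = 0; o < out_dim; ++o) {
        int32_t acc = bias[o];

        /* Inner dot product: the part a hardware loop plus a dot
         * product unit would execute with far fewer instructions. */
        for (size_t i = 0; i < in_dim; ++i) {
            acc += (int32_t)weights[o * in_dim + i] * (int32_t)input[i];
        }

        /* Simple saturation to the int8 range; a real layer would also
         * apply the appropriate requantization scaling and activation. */
        if (acc > 127)  acc = 127;
        if (acc < -128) acc = -128;
        output[o] = (int8_t)acc;
    }
}
```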

Place, publisher, year, edition, pages
IEEE, 2020. Vol. 8, p. 165913-165926
Keywords [en]
Hardware, Machine learning, Biological neural networks, Instruction sets, Microcontrollers, Embedded systems, Deep learning, instruction set optimization, RISC-V
National Category
Computer Engineering; Computer Systems; Embedded Systems
Identifiers
URN: urn:nbn:se:umu:diva-175863
DOI: 10.1109/ACCESS.2020.3022824
ISI: 000572879300001
Scopus ID: 2-s2.0-85100209752
OAI: oai:DiVA.org:umu-175863
DiVA id: diva2:1476191
Available from: 2020-10-14. Created: 2020-10-14. Last updated: 2023-11-10. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | Scopus

Authority records

Bientinesi, Paolo

