Benchmarking the Linear Algebra Awareness of TensorFlow and PyTorch
RWTH Aachen University, Germany.
RWTH Aachen University, Germany.
RWTH Aachen University, Germany.
Umeå University, Faculty of Science and Technology, Department of Computing Science. Umeå University, Faculty of Science and Technology, High Performance Computing Center North (HPC2N). ORCID iD: 0000-0002-4972-7097
2022 (English). In: Proceedings: 2022 IEEE 36th International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2022, IEEE, 2022, p. 924-933. Conference paper, Published paper (Refereed).
Abstract [en]

Linear algebra operations, which are ubiquitous in machine learning, form major performance bottlenecks. The High-Performance Computing community invests significant effort in the development of architecture-specific optimized kernels, such as those provided by the BLAS and LAPACK libraries, to speed up linear algebra operations. However, end users are progressively less likely to go through the error-prone and time-consuming process of directly using said kernels; instead, frameworks such as TensorFlow (TF) and PyTorch (PyT), which facilitate the development of machine learning applications, are becoming increasingly popular. Although such frameworks link to BLAS and LAPACK, it is not clear whether they make use of linear algebra knowledge to speed up computations. For this reason, in this paper we develop benchmarks to investigate the linear algebra optimization capabilities of TF and PyT. Our analyses reveal that a number of linear algebra optimizations are still missing; for instance, reducing the number of scalar operations by applying the distributive law, and automatically identifying the optimal parenthesization of a matrix chain. In this work, we focus on linear algebra computations in TF and PyT; we both expose opportunities for performance enhancement to the benefit of the developers of the frameworks, and provide end users with guidelines on how to achieve performance gains.
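The two optimizations the abstract names can be illustrated independently of TF and PyT. The following NumPy sketch (matrix names and sizes are illustrative, not taken from the paper's benchmarks) shows why parenthesization of a matrix chain and the distributive law change the operation count without changing the result:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
v = rng.standard_normal((n, 1))

# Matrix chain: (A @ B) @ v performs an n x n matrix product (~2n^3 flops),
# while A @ (B @ v) performs two matrix-vector products (~4n^2 flops).
flops_left = 2 * n**3 + 2 * n**2   # left-to-right evaluation
flops_right = 4 * n**2             # optimal parenthesization
assert np.allclose((A @ B) @ v, A @ (B @ v))

# Distributive law: A @ B + A @ C uses two matrix products (~4n^3 flops),
# whereas A @ (B + C) uses one product plus one addition (~2n^3 + n^2 flops).
assert np.allclose(A @ B + A @ C, A @ (B + C))

print(f"chain speedup in flops: {flops_left / flops_right:.0f}x")
```

A framework with linear algebra awareness could apply such rewrites automatically; the paper's benchmarks probe whether TF and PyT do so.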

Place, publisher, year, edition, pages
IEEE, 2022. p. 924-933
Keywords [en]
Linear Algebra, Machine Learning, Performance analysis
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:umu:diva-199008
DOI: 10.1109/IPDPSW55747.2022.00150
ISI: 000855041000111
Scopus ID: 2-s2.0-85136223573
ISBN: 9781665497473 (electronic)
ISBN: 9781665497480 (print)
OAI: oai:DiVA.org:umu-199008
DiVA, id: diva2:1692576
Conference
36th IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2022, Lyon, France, 30 May - 03 June 2022
Available from: 2022-09-02. Created: 2022-09-02. Last updated: 2023-11-10. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Bientinesi, Paolo
