Scheduling of parallel matrix computations and data layout conversion for HPC and Multi-Core Architectures
Karlsson, Lars
Umeå University, Faculty of Science and Technology, Department of Computing Science; High Performance Computing Center North (HPC2N)
ORCID iD: 0000-0002-4675-7434
2011 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Dense linear algebra represents fundamental building blocks in many computational science and engineering applications. The dense linear algebra algorithms must be numerically stable, robust, and reliable in order to be usable as black-box solvers by expert as well as non-expert users. The algorithms also need to scale and run efficiently on massively parallel computers with multi-core nodes. Developing high-performance algorithms for dense matrix computations is a challenging task, especially since the widespread adoption of multi-core architectures. Cache reuse is an even more critical issue on multi-core processors than on uni-core processors due to their larger computational power and more complex memory hierarchies. Blocked matrix storage formats, in which blocks of the matrix are stored contiguously, and blocked algorithms, in which the algorithms exhibit large amounts of cache reuse, remain key techniques in the effort to approach the theoretical peak performance.
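
To make the idea concrete, the following NumPy sketch copies a matrix into a simple block data layout in which each nb-by-nb tile is stored contiguously. The helper name and the tile ordering are illustrative assumptions; the four standard blocked formats treated in the thesis prescribe their own, more detailed layouts.

```python
import numpy as np

def to_block_layout(A, nb):
    """Copy A into a flat buffer where each nb-by-nb tile is contiguous.
    Hypothetical helper for illustration: tiles are visited in
    column-major tile order and stored column-major internally; edge
    tiles on the bottom/right are simply smaller."""
    m, n = A.shape
    buf = np.empty(m * n, dtype=A.dtype)
    pos = 0
    for jb in range(0, n, nb):
        for ib in range(0, m, nb):
            tile = A[ib:ib + nb, jb:jb + nb]
            buf[pos:pos + tile.size] = tile.ravel(order="F")
            pos += tile.size
    return buf

# A blocked algorithm then works on one contiguous tile at a time,
# which is what makes the cache reuse possible.
A = np.arange(36.0).reshape(6, 6)
buf = to_block_layout(A, nb=2)
```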

In Paper I, we present a packed and distributed Cholesky factorization algorithm based on a new blocked and packed matrix storage format. High performance node computations are obtained as a result of the blocked storage format, and the use of look-ahead leads to improved parallel efficiency. In Paper II and Paper III, we study the problem of in-place matrix transposition in general and in-place matrix storage format conversion in particular. We present and evaluate new high-performance parallel algorithms for in-place conversion between the standard column-major and row-major formats and the four standard blocked matrix storage formats. Another critical issue, besides cache reuse, is that of efficient scheduling of computational tasks. Many weakly scalable parallel algorithms are efficient only when the problem size per processor is relatively large. A current research trend focuses on developing parallel algorithms which are more strongly scalable and hence more efficient also for smaller problems.
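
As background for why in-place transposition is nontrivial: for a square matrix the transpose permutation consists only of two-element swaps, as in the generic sketch below, whereas for a rectangular matrix (and for storage format conversions) the permutation decomposes into long cycles and no such pairing exists. This is textbook material, not an algorithm from Papers II and III.

```python
import numpy as np

def transpose_square_inplace(A):
    """In-place transpose of a square matrix: every off-diagonal element
    is simply swapped with its mirror image, so no extra memory and no
    cycle bookkeeping are needed."""
    n = A.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            A[i, j], A[j, i] = A[j, i], A[i, j]

A = np.arange(16.0).reshape(4, 4)
transpose_square_inplace(A)
assert np.array_equal(A, np.arange(16.0).reshape(4, 4).T)
```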

In Paper IV, we present a framework for dynamic node-scheduling of two-sided matrix computations and demonstrate that priority-based scheduling yields an efficient schedule for a QR sweep. In Paper V and Paper VI, we present a blocked implementation of two-stage Hessenberg reduction targeting multi-core architectures. The main contributions of Paper V are in the blocking and scheduling of the second stage. Specifically, we show that the concept of look-ahead can be applied also to this two-sided factorization, and we propose an adaptive load-balancing technique that allows us to schedule the operations effectively.
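
The following toy illustrates the general idea behind priority-based scheduling of a task graph: among the tasks whose prerequisites have completed, always run the one with the best priority, for example a task on the critical path. The function name and DAG encoding are assumptions made for this sketch; it is a serial illustration, not the dynamic node-scheduling framework of Paper IV.

```python
import heapq

def run_with_priorities(tasks, deps, priority):
    """Serial priority-based list scheduler (illustration only).
    tasks: id -> callable; deps: id -> set of prerequisite ids;
    priority: id -> number, lower runs first among ready tasks."""
    remaining = {t: set(d) for t, d in deps.items()}
    ready = [(priority[t], t) for t, d in remaining.items() if not d]
    heapq.heapify(ready)
    order = []
    while ready:
        _, t = heapq.heappop(ready)
        tasks[t]()                      # execute the task
        order.append(t)
        for s, d in remaining.items():  # release dependent tasks
            if t in d:
                d.remove(t)
                if not d:
                    heapq.heappush(ready, (priority[s], s))
    return order

tasks = {t: (lambda: None) for t in "abcd"}
deps = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
prio = {"a": 0, "b": 0, "c": 1, "d": 0}  # favor the critical path
print(run_with_priorities(tasks, deps, prio))  # ['a', 'b', 'c', 'd']
```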

Abstract [sv]

Matrix computations are fundamental building blocks in many computationally intensive scientific and engineering applications. The algorithms must be numerically stable and robust so that users can rely on the computed results. They must also scale and run efficiently on massively parallel computers whose nodes consist of multi-core processors.

Developing high-performance algorithms for dense matrix computations is challenging, especially since the introduction of multi-core processors. Reusing data in the cache memories of a multi-core processor is even more important because of its high computational performance. Two central techniques in the pursuit of algorithms with optimal performance are blocked algorithms and blocked matrix storage formats. A blocked algorithm has a memory access pattern that fits the memory hierarchy well. A blocked matrix storage format places the elements of the matrix in memory so that the elements of specific matrix blocks are stored contiguously.

Paper I presents an algorithm for Cholesky factorization of a matrix stored compactly in a distributed memory. The new storage format is blocked and thereby enables high performance. Papers II and III describe how a conventionally stored matrix can be converted to and from a blocked storage format using only a very small amount of extra storage. The solution builds on a new parallel algorithm for transposition of rectangular matrices.

Creating a scalable parallel algorithm also requires that the various computational tasks be scheduled efficiently. Many so-called weakly scalable algorithms are efficient only for relatively large problems. A current research trend is to develop so-called strongly scalable algorithms, which are more efficient also for smaller problems.

Paper IV introduces a dynamic scheduling system for two-sided matrix computations. The computational tasks are distributed statically across the nodes and then scheduled dynamically within each node. The paper also shows how priority-based scheduling turns a previously inefficient algorithm for a so-called QR sweep into an efficient one.

Papers V and VI present new parallel blocked algorithms, designed for multi-core processors, for a two-stage Hessenberg reduction. The central contributions of Paper V are a blocked algorithm for the second stage of the reduction and an adaptive load-balancing method.

Place, publisher, year, edition, pages
Umeå: Institutionen för datavetenskap, Umeå universitet, 2011.
Series
Report / UMINF, ISSN 0348-0542 ; 11.04
HSV category
Research subject
Computer and Systems Sciences
Identifiers
URN: urn:nbn:se:umu:diva-41224
ISBN: 978-91-7459-214-6 (print)
OAI: oai:DiVA.org:umu-41224
DiVA, id: diva2:411700
Public defence
2011-05-19, KBC-huset, KB3B1, Umeå universitet, Umeå, 13:00 (English)
Opponent
Supervisor
Available from: 2011-04-28 Created: 2011-03-21 Last updated: 2018-06-08 Bibliographically approved
List of papers
1. Distributed SBP Cholesky factorization algorithms with near-optimal scheduling
2009 (English) In: ACM Transactions on Mathematical Software, ISSN 0098-3500, E-ISSN 1557-7295, Vol. 36, no. 2, pp. 11:1-11:25. Article in journal (Refereed) Published
Identifiers
urn:nbn:se:umu:diva-24692 (URN)
10.1145/1499096.1499100 (DOI)
Available from: 2009-07-09 Created: 2009-07-09 Last updated: 2018-06-08 Bibliographically approved
2. Blocked in-place transposition with application to storage format conversion
2009 (English) Report (Other academic)
Abstract [en]

We develop a prototype library for in-place (dense) matrix storage format conversion between the canonical row- and column-major formats and the four canonical block data layouts. Many of the fastest linear algebra routines operate on matrices in a block data layout. In-place storage format conversion enables support for input/output of large matrices in the canonical row- and column-major formats. The library uses algorithms associated with in-place transposition as building blocks. We survey previous work on (in-place) transposition and implement and evaluate the most promising algorithms. Our results indicate that the Three-Stage Algorithm, which requires only a small constant amount of additional memory, performs well and is easy to tune. Murray Dow's V5 algorithm, a two-stage semi-in-place algorithm that requires a small amount of additional memory, is sometimes a better choice. The write-allocate strategy of most cache-based computer architectures appears to be the cause of an observed performance problem for large matrices.
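
For reference, the textbook cycle-following approach to in-place transposition works as follows: element k of an m-by-n matrix stored row-major must move to flat index k*m mod (mn-1), and the algorithm chases each cycle of this permutation. The sketch below uses a visited bitmap for clarity, which is exactly the kind of extra memory that the algorithms evaluated in the paper, such as the Three-Stage Algorithm, manage to avoid; it also ignores the cache behaviour that the paper is concerned with.

```python
import numpy as np

def transpose_inplace_rowmajor(buf, m, n):
    """Cycle-following in-place transpose of an m-by-n row-major matrix
    held in the flat array buf. Indices 0 and m*n-1 are fixed points;
    every other element at index k moves to (k * m) % (m*n - 1)."""
    mn = m * n
    if mn <= 2:
        return
    visited = np.zeros(mn, dtype=bool)
    for start in range(1, mn - 1):
        if visited[start]:
            continue
        k, val = start, buf[start]
        while True:
            nxt = (k * m) % (mn - 1)    # destination of element k
            buf[nxt], val = val, buf[nxt]
            visited[nxt] = True
            k = nxt
            if k == start:
                break

buf = np.arange(12.0)                   # a 3x4 matrix, row-major
transpose_inplace_rowmajor(buf, 3, 4)
assert np.array_equal(buf.reshape(4, 3), np.arange(12.0).reshape(3, 4).T)
```

Note that converting a row-major buffer to column-major storage of the same logical matrix is exactly this permutation, which is why in-place transposition serves as a building block for storage format conversion.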

Place, publisher, year, edition, pages
Umeå: Institutionen för datavetenskap, Umeå universitet, 2009. p. 29
Series
Report / UMINF, ISSN 0348-0542 ; 09.01
Identifiers
urn:nbn:se:umu:diva-41217 (URN)
Available from: 2011-03-21 Created: 2011-03-21 Last updated: 2018-06-08 Bibliographically approved
3. A framework for dynamic node-scheduling of two-sided blocked matrix computations
2009 (English) In: Applied Parallel Computing - State of the Art in Parallel and Scientific Computing: 9th International Workshop, PARA 2008, 2009. Conference paper, Published paper (Refereed)
Series
Lecture Notes in Computer Science, ISSN 0302-9743
Identifiers
urn:nbn:se:umu:diva-24701 (URN)
Conference
9th International Workshop, PARA 2008
Note
Accepted for publication. Available from: 2009-07-10 Created: 2009-07-10 Last updated: 2018-06-08 Bibliographically approved
4. Efficient reduction from block Hessenberg form to Hessenberg form using shared memory
2010 (English) In: PARA 2010: State of the Art in Scientific and Parallel Computing, Reykjavik, June 6-9, 2010, 2010. Conference paper, Published paper (Refereed)
Identifiers
urn:nbn:se:umu:diva-41219 (URN)
Conference
PARA 2010, Reykjavik, June 6-9, 2010
Note
Accepted for publication. Available from: 2011-03-21 Created: 2011-03-21 Last updated: 2018-06-08 Bibliographically approved
5. Parallel two-stage reduction to Hessenberg form using dynamic scheduling on shared-memory architectures
2011 (English) In: Parallel Computing, ISSN 0167-8191, E-ISSN 1872-7336, Vol. 37, no. 12, pp. 771-782. Article in journal (Refereed) Published
Abstract [en]

We consider parallel reduction of a real matrix to Hessenberg form using orthogonal transformations. Standard Hessenberg reduction algorithms reduce the columns of the matrix from left to right in either a blocked or unblocked fashion. However, the standard blocked variant performs 20% of the computations in terms of matrix-vector multiplications. We show that a two-stage approach consisting of an intermediate reduction to block Hessenberg form speeds up the reduction by avoiding matrix-vector multiplications. We describe and evaluate a new high-performance implementation of the two-stage approach that attains significant speedups over the one-stage approach. The key components are a dynamically scheduled implementation of Stage 1 and a blocked, adaptively load-balanced implementation of Stage 2.
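
For context, the sketch below is a plain one-stage, unblocked Hessenberg reduction with Householder reflectors, i.e. the classical computation that the paper accelerates. It is textbook material written for illustration; the paper's contribution is a blocked, dynamically scheduled two-stage algorithm that first reduces to block Hessenberg form.

```python
import numpy as np

def hessenberg(A):
    """Reduce a real square matrix to upper Hessenberg form H = Q^T A Q
    using Householder reflectors (Q is not accumulated here). Unblocked
    textbook version: column k is reduced by one reflector applied from
    both sides, which is what makes the reduction 'two-sided'."""
    H = np.array(A, dtype=float)
    n = H.shape[0]
    for k in range(n - 2):
        x = H[k + 1:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        nv = np.linalg.norm(v)
        if nv == 0.0:                   # column already reduced
            continue
        v /= nv
        # Apply I - 2 v v^T from the left, then from the right.
        H[k + 1:, k:] -= 2.0 * np.outer(v, v @ H[k + 1:, k:])
        H[:, k + 1:] -= 2.0 * np.outer(H[:, k + 1:] @ v, v)
    return H

A = np.random.default_rng(0).standard_normal((6, 6))
H = hessenberg(A)
assert np.allclose(np.tril(H, -2), 0.0)            # Hessenberg structure
assert np.allclose(np.sort(np.linalg.eigvals(H)),
                   np.sort(np.linalg.eigvals(A)))  # similarity preserved
```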

Keywords
Hessenberg reduction, Blocked algorithm, Parallel computing, Dynamic scheduling, High performance, Multi-core, Memory hierarchies
HSV category
Identifiers
urn:nbn:se:umu:diva-41221 (URN)
10.1016/j.parco.2011.05.001 (DOI)
000298522200005 (ISI)
Available from: 2011-03-21 Created: 2011-03-21 Last updated: 2018-06-08 Bibliographically approved

Open Access in DiVA

fulltext (FULLTEXT01.pdf, 689 kB, application/pdf)

