Hybrid Resource Management for HPC and Data Intensive Workloads
Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. (Distributed Systems Group)
PDC Center for High Performance Computing, KTH Royal Institute of Technology, Sweden.
PDC Center for High Performance Computing, KTH Royal Institute of Technology, Sweden.
Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. (Distributed Systems Group)
2019 (English) In: 2019 19th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID), Los Alamitos: IEEE Computer Society, 2019, pp. 399-409. Conference paper, Published paper (Refereed)
Abstract [en]

Traditionally, High Performance Computing (HPC) and Data Intensive (DI) workloads have been executed on separate hardware using different tools for resource and application management. With the increasing convergence of these paradigms, where modern applications are composed of both types of jobs in complex workflows, this separation becomes a growing overhead and the need for a common computation platform for both application areas increases. Executing both application classes on the same hardware not only enables hybrid workflows, but can also increase the usage efficiency of the system, as often not all available hardware is fully utilized by an application. While HPC systems are typically managed in a coarse-grained fashion, allocating a fixed set of resources exclusively to an application, DI systems employ a finer-grained regime, enabling dynamic resource allocation and control based on application needs. On the path to full convergence, a useful and less intrusive step is a hybrid resource management system that allows the execution of DI applications on top of standard HPC scheduling systems.

In this paper we present the architecture of a hybrid system enabling dual-level scheduling for DI jobs in HPC infrastructures. Our system takes advantage of real-time resource utilization monitoring to efficiently co-schedule HPC and DI applications. The architecture is easily adaptable and extensible to current and new types of distributed workloads, allowing efficient combination of hybrid workloads on HPC resources with increased job throughput and higher overall resource utilization. The architecture is implemented based on the Slurm and Mesos resource managers for HPC and DI jobs. Our experimental evaluation in a real cluster, based on a set of representative HPC and DI applications, demonstrates that our hybrid architecture improves resource utilization by 20%, with a 12% decrease in queue makespan while still meeting all deadlines for HPC jobs.
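The co-scheduling step the abstract describes (placing DI tasks on nodes whose monitored utilization leaves spare capacity) can be sketched roughly as follows. All names, thresholds, and the safety margin are illustrative assumptions for this record page, not the paper's actual Slurm/Mesos interfaces.

```python
# Illustrative sketch only: backfill a Data Intensive (DI) task onto
# nodes whose real-time monitored utilization leaves enough headroom.
# Node, place_di_task, and the 10% safety margin are hypothetical.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Node:
    name: str
    cpus: int            # CPUs allocated to the HPC job on this node
    cpu_used: float      # CPUs currently in use (from monitoring)
    mem_mb: int
    mem_used_mb: float

def headroom(node: Node) -> Tuple[float, float]:
    """Spare capacity on a node nominally held by an HPC job."""
    return node.cpus - node.cpu_used, node.mem_mb - node.mem_used_mb

def place_di_task(nodes: List[Node], need_cpus: float,
                  need_mem_mb: float, safety: float = 0.1) -> Optional[Node]:
    """Pick the node with the most spare CPU, keeping a safety margin
    so demand spikes of the co-located HPC job are not starved."""
    candidates = []
    for n in nodes:
        free_cpu, free_mem = headroom(n)
        if (free_cpu * (1 - safety) >= need_cpus
                and free_mem * (1 - safety) >= need_mem_mb):
            candidates.append((free_cpu, n))
    if not candidates:
        return None  # no headroom: leave the task in the regular DI queue
    return max(candidates, key=lambda c: c[0])[1]
```

In a real deployment the utilization figures would come from the monitoring layer and placement would go through the DI framework's scheduler; the sketch only captures the headroom-based placement decision.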

Place, publisher, year, edition, pages
Los Alamitos: IEEE Computer Society, 2019. pp. 399-409
Keywords [en]
Resource Management, High Performance Computing, Data Intensive Computing, Mesos, Slurm, Bootstrapping
National subject category
Computer Systems
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:umu:diva-155619
DOI: 10.1109/CCGRID.2019.00054
ISBN: 978-1-7281-0913-8 (print)
ISBN: 978-1-7281-0912-1 (electronic)
OAI: oai:DiVA.org:umu-155619
DiVA, id: diva2:1282469
Conference
CCGrid 2019, 14-17 May 2019, Larnaca, Cyprus
Note

Originally included in thesis in manuscript form

Available from: 2019-01-24 Created: 2019-01-24 Last updated: 2019-11-18 Bibliographically approved
Part of thesis
1. Application-aware resource management for datacenters
2018 (English) Licentiate thesis, comprehensive summary (Other academic)
Alternative title [sv]
Applikationsmedveten resurshantering för datacenter
Abstract [en]

High Performance Computing (HPC) and Cloud Computing datacenters are extensively used to steer and solve complex problems in science, engineering, and business, such as calculating correlations and making predictions. Already in a single datacenter server, there are thousands of hardware and software metrics – Key Performance Indicators (KPIs) – that, individually and aggregated, can give insight into the performance, robustness, and efficiency of the datacenter and the provisioned applications. At the datacenter level, the number of KPIs is even higher. The fast-growing interest in datacenter management from both the public sector and industry, together with the rapid expansion in scale and complexity of datacenter resources and the services provided on them, has made monitoring, profiling, controlling, and provisioning compute resources dynamically at runtime a challenging and complex task. Commonly, correlations of application KPIs, like response time and throughput, with resource capacities show that the runtime systems (e.g., containers or virtual machines) used to provision these applications do not utilize available resources efficiently. This reduces datacenter efficiency, which in turn results in higher operational costs and longer waiting times for results.

The goal of this thesis is to develop tools and autonomic techniques for improving datacenter operations, management, and utilization, while improving application performance or minimizing the impact on it. To this end, we make use of application resource descriptors to create a library that dynamically adjusts the amount of resources used, enabling elasticity for scientific workflows in HPC datacenters. For mission-critical applications, high availability is of great concern, since these services must be kept running even in the event of system failures. By modeling and correlating specific resource counters, like CPU, memory, and network utilization, with the number of runtime synchronizations, we present adaptive mechanisms that dynamically select which fault tolerance mechanism to use. Likewise, for scientific applications we propose a hybrid extensible architecture for dual-level scheduling of data intensive jobs in HPC infrastructures, allowing operational simplification, on-boarding of new types of applications, and greater job throughput with higher overall datacenter efficiency.
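The elasticity idea above, adjusting allocated resources at runtime toward a target described in an application resource descriptor, might be sketched as a simple control loop. The function name, thresholds, and doubling/halving policy are assumptions for illustration, not the thesis library's actual behavior.

```python
# Hypothetical elasticity control step: scale the worker count so that
# measured utilization stays inside a target band taken from an
# application resource descriptor. Thresholds and policy are illustrative.
def adjust_workers(current_workers: int, utilization: float,
                   low: float = 0.5, high: float = 0.85,
                   min_workers: int = 1, max_workers: int = 64) -> int:
    """Return the worker count for the next control interval."""
    if utilization > high:          # workers saturated: grow
        new = current_workers * 2
    elif utilization < low:         # workers mostly idle: shrink
        new = current_workers // 2
    else:                           # inside the target band: hold
        new = current_workers
    return max(min_workers, min(max_workers, new))
```

Called once per monitoring interval, this converges the allocation toward the band [low, high] while respecting the descriptor's minimum and maximum bounds.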

Place, publisher, year, edition, pages
Umeå: Department of Computing Science, Umeå University, 2018. p. 28
Series
Report / UMINF, ISSN 0348-0542 ; 18.14
Keywords
Resource Management, High Performance Computing, Cloud Computing
National subject category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-155620 (URN)
978-91-7601-971-9 (ISBN)
Presentation
2018-12-12, MA121, MIT-Huset, Umeå, 20:31 (English)
Available from: 2019-01-25 Created: 2019-01-24 Last updated: 2019-02-04 Bibliographically approved

Open Access in DiVA

fulltext (987 kB), 14 downloads
File information
File name: FULLTEXT02.pdf
File size: 987 kB
Checksum: SHA-512
fda85a01694231808cc74df461367b2e58efb7b6bf2cc17257c32b2bbd53408db732c321481ba379f73bde856b3ba160460fca570d77e837dcfda65008bccb7e
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text

Authority records BETA

Souza, Abel
Tordsson, Johan

Total: 349 downloads
The number of downloads is the sum of downloads for all full texts. It may include, for example, earlier versions that are no longer available.
