Accurate and low-overhead workload prediction for cloud management
Kidane, Lidia (Umeå University, Faculty of Science and Technology, Department of Computing Science, Autonomous Distributed Systems Lab). ORCID iD: 0000-0002-8097-1143
2025 (English). Doctoral thesis, comprehensive summary (Other academic)
Alternative title
Noggrann och effektiv prediktering av last för resurshantering i datormoln (Swedish)
Abstract [en]

Cloud computing has transformed the IT landscape by offering users and organizations on-demand access to computing power, storage, data processing, and machine learning resources. Despite these benefits, cloud resource management faces challenges due to the heterogeneous and dynamic nature of workloads. Inefficient provisioning manifests in two critical forms: underprovisioning leads to degraded Quality of Service (QoS) and unmet Service-Level Agreements (SLAs), while overprovisioning results in unnecessary energy consumption and high operational costs. With the current rise of AI and machine learning innovations, machine learning-based workload prediction for resource provisioning plays a vital role in predicting future scenarios and identifying new occurrences, enabling service providers to prepare ahead of time. However, various challenges are associated with machine learning-based workload prediction.

This thesis addresses the challenges of machine learning-based workload prediction in cloud environments, including data drift due to dynamic workloads, high computational overhead, and storage overhead. Firstly, cloud workloads are dynamic, and models trained on old historical data can become obsolete over time. We address the challenge of accurate prediction under data drift by combining machine learning with streaming data processing algorithms to support adaptive prediction. Secondly, constantly training and updating deep learning models adds significant computational overhead to the cloud infrastructure. We address this problem by proposing a solution that combines a knowledge base repository with transfer learning-based adaptation, and we explore the tradeoff between model accuracy and computational overhead. Finally, we propose a data compression mechanism that leverages an autoencoder to reduce the storage overhead resulting from the continuous generation of monitoring data in cloud management systems.

Our findings show that the proposed methods significantly improve machine learning-based cloud management. Extensive evaluation on real-world datasets reveals that the methods yield accurate predictions even in the face of ever-changing patterns in cloud workloads. Moreover, they reduce computational overhead by leveraging existing knowledge, and they highlight the tradeoff required to balance prediction accuracy against computational overhead.
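
Illustrative sketch (not part of the record): the storage-reduction idea above, an autoencoder that compresses continuously generated monitoring data, can be pictured with a small Keras-style model. The 32-metric input, the layer sizes, and the 8-dimensional code are assumptions made for this example, not the architecture used in the thesis.

import numpy as np
import tensorflow as tf

n_metrics = 32     # assumed number of monitoring metrics per sample
code_dim = 8       # assumed size of the compressed representation

inputs = tf.keras.Input(shape=(n_metrics,))
hidden = tf.keras.layers.Dense(16, activation="relu")(inputs)
code = tf.keras.layers.Dense(code_dim, activation="relu")(hidden)
hidden_out = tf.keras.layers.Dense(16, activation="relu")(code)
outputs = tf.keras.layers.Dense(n_metrics, activation="linear")(hidden_out)

autoencoder = tf.keras.Model(inputs, outputs)   # trained to reconstruct its input
encoder = tf.keras.Model(inputs, code)          # used to store compact codes
autoencoder.compile(optimizer="adam", loss="mse")

# Train on historical monitoring samples (synthetic stand-in data here),
# then persist only the encoder outputs instead of the raw metrics.
history = np.random.rand(10_000, n_metrics).astype("float32")
autoencoder.fit(history, history, epochs=5, batch_size=64, verbose=0)
compressed = encoder.predict(history, verbose=0)  # shape: (10000, code_dim)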

Place, publisher, year, edition, pages
Umeå: Umeå University, 2025, p. 38
Series
Report / UMINF, ISSN 0348-0542 ; 25.09
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:umu:diva-238533
ISBN: 978-91-8070-713-8 (electronic)
ISBN: 978-91-8070-712-1 (print)
OAI: oai:DiVA.org:umu-238533
DiVA, id: diva2:1956902
Public defence
2025-05-30, 13:15, MIT.A.121, MIT-huset, Umeå (English)
Available from: 2025-05-09. Created: 2025-05-07. Last updated: 2025-05-08. Bibliographically approved.
List of papers
1. When and How to Retrain Machine Learning-based Cloud Management Systems
2022 (English). In: 2022 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), IEEE, 2022, p. 688-698. Conference paper, Published paper (Refereed)
Abstract [en]

Cloud management systems increasingly rely on machine learning (ML) models to predict incoming workload rates, load, and other system behaviors for efficient dynamic resource management. Current state-of-the-art prediction models demonstrate high accuracy, but assume that data patterns remain stable. In production use, however, systems may face hardware upgrades, changes in user behavior, and similar events that lead to concept drift: significant changes in the characteristics of data streams over time. To mitigate prediction deterioration, ML models need to be updated, but the questions of when and how to best retrain these models remain unsolved in the context of cloud management. We present a pilot study that addresses these questions for one of the most common models for adaptive prediction, Long Short-Term Memory (LSTM), using synthetic and real-world workload data. Our analysis of when to retrain explores approaches for detecting when retraining is required, using both concept drift detection and prediction error thresholds, and at what point retraining should actually take place. Our analysis of how to retrain focuses on the data required for retraining, and what proportion should be taken from before and after the need for retraining is detected. We present initial results indicating that retraining of existing models can achieve prediction accuracy close to that of newly trained models at much lower cost, and offer initial advice on how to provide cloud management systems with support for automatic retraining of ML-based methods.
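
Illustrative sketch (not part of the record): the paper's when/how questions can be pictured as a prediction-error-based retraining trigger plus a rule for mixing data from before and after the detection point. The window length, the 1.5x error threshold, and the 70/30 mixing ratio are illustrative assumptions, not values prescribed by the paper.

import numpy as np

WINDOW = 100       # recent prediction errors considered when checking for drift
THRESHOLD = 1.5    # retrain if the recent MAE exceeds 1.5x the baseline MAE

def needs_retraining(errors, baseline_mae):
    """Flag retraining when the rolling mean absolute error degrades."""
    recent_mae = float(np.mean(np.abs(errors[-WINDOW:])))
    return recent_mae > THRESHOLD * baseline_mae

def build_retraining_set(series, detect_idx, post_fraction=0.7, size=2000):
    """Mix samples taken after and before the detected change point."""
    n_post = int(size * post_fraction)
    n_pre = size - n_post
    post = series[detect_idx:detect_idx + n_post]
    pre = series[max(0, detect_idx - n_pre):detect_idx]
    return np.concatenate([pre, post])   # data used to retrain the existing LSTM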

Place, publisher, year, edition, pages
IEEE, 2022
Keywords
cloud computing, cloud workload prediction, concept drift, machine learning, time series prediction
National Category
Computer Systems
Identifiers
urn:nbn:se:umu:diva-198541 (URN)
10.1109/IPDPSW55747.2022.00120 (DOI)
000855041000086 ()
2-s2.0-85136190866 (Scopus ID)
9781665497473 (ISBN)
9781665497480 (ISBN)
Conference
2022 IEEE International Parallel and Distributed Processing Symposium, 30 May 2022-03 June 2022, Lyon, France
Funder
Knut and Alice Wallenberg Foundation; eSSENCE - An eScience Collaboration
Available from: 2022-08-09. Created: 2022-08-09. Last updated: 2025-05-07. Bibliographically approved.
2. Efficient retraining of machine learning algorithms in cloud management systems
(English). Manuscript (preprint) (Other academic)
National Category
Computer Systems
Identifiers
urn:nbn:se:umu:diva-238555 (URN)
Available from: 2025-05-08. Created: 2025-05-08. Last updated: 2025-05-08. Bibliographically approved.
3. Automated hyperparameter tuning for adaptive cloud workload prediction
2023 (English). In: UCC '23: Proceedings of the IEEE/ACM 16th International Conference on Utility and Cloud Computing, New York: Association for Computing Machinery (ACM), 2023. Conference paper, Published paper (Refereed)
Abstract [en]

Efficient workload prediction is essential for enabling timely resource provisioning in cloud computing environments. However, achieving accurate predictions, ensuring adaptability to changing conditions, and minimizing computational overhead pose significant challenges for workload prediction models. Furthermore, the continuously streaming nature of workload metrics requires careful consideration when applying machine learning and data mining algorithms, as manual hyperparameter optimization can be time-consuming and suboptimal. We propose an automated parameter tuning and adaptation approach for the workload prediction models and concept drift detection algorithms used to predict future workload. Our method leverages a pre-built knowledge base of statistical features extracted from historical data, enabling automatic adjustment of model weights and concept drift detection parameters; model adaptation is facilitated through a transfer learning approach. We evaluate the effectiveness of our automated approach by comparing it with static approaches on synthetic and real-world datasets. By automating the parameter tuning process and integrating concept drift detection, the proposed method improves the accuracy and efficiency of workload prediction models by 50% in our experiments.
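
Illustrative sketch (not part of the record): the knowledge-base idea can be pictured as a nearest-neighbour lookup over statistical features of a workload trace, returning the hyperparameters of the most similar historical workload. The chosen features (mean, standard deviation, lag-1 autocorrelation), the Euclidean distance, and the stored entries are assumptions made for this example.

import numpy as np

def workload_features(series):
    """Summarize a workload trace with a few simple statistics."""
    series = np.asarray(series, dtype=float)
    lag1 = np.corrcoef(series[:-1], series[1:])[0, 1]
    return np.array([series.mean(), series.std(), lag1])

# Each knowledge-base entry pairs a feature vector with hyperparameters
# (here: LSTM units, learning rate, and a drift-detector sensitivity).
knowledge_base = [
    (np.array([50.0, 5.0, 0.9]), {"units": 32, "lr": 1e-3, "delta": 0.002}),
    (np.array([200.0, 60.0, 0.4]), {"units": 64, "lr": 5e-4, "delta": 0.01}),
]

def select_hyperparameters(series):
    """Return the hyperparameters of the closest stored workload profile."""
    feats = workload_features(series)
    dists = [np.linalg.norm(feats - kb_feats) for kb_feats, _ in knowledge_base]
    return knowledge_base[int(np.argmin(dists))][1]

params = select_hyperparameters(np.random.poisson(55, size=1000))
# The selected model would then be adapted to the new workload via transfer learning.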

Place, publisher, year, edition, pages
New York: Association for Computing Machinery (ACM), 2023
Keywords
Cloud computing, Hyperparameter optimization, Workload prediction, Concept drift, Data mining
National Category
Computer Systems
Identifiers
urn:nbn:se:umu:diva-223451 (URN)
10.1145/3603166.3632244 (DOI)
001211822800044 ()
2-s2.0-85191659681 (Scopus ID)
979-8-4007-0234-1 (ISBN)
Conference
UCC '23: IEEE/ACM 16th International Conference on Utility and Cloud Computing, Taormina (Messina), Italy, December 4-7, 2023
Funder
Knut and Alice Wallenberg Foundation, 2019.0352; eSSENCE - An eScience Collaboration
Available from: 2024-04-16. Created: 2024-04-16. Last updated: 2025-05-07. Bibliographically approved.
4. A hybrid autoencoder-LSTM framework for efficient workload prediction
(English). Manuscript (preprint) (Other academic)
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:umu:diva-238556 (URN)
Available from: 2025-05-08. Created: 2025-05-08. Last updated: 2025-05-08. Bibliographically approved.
5. A data-driven framework for efficient and automated workload prediction in cloud computing
(English). Manuscript (preprint) (Other academic)
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:umu:diva-238557 (URN)
Available from: 2025-05-08. Created: 2025-05-08. Last updated: 2025-05-08. Bibliographically approved.

Open Access in DiVA

fulltext (2642 kB): FULLTEXT01.pdf, application/pdf
spikblad (216 kB): FULLTEXT02.pdf, application/pdf
