Umeå University
umu.se Publications
1 - 2 of 2
  • 1.
    Kidane, Lidia (Umeå University, Faculty of Science and Technology, Department of Computing Science)
    Townend, Paul (Umeå University, Faculty of Science and Technology, Department of Computing Science)
    Metsch, Thijs (Intel Corporation, Neubiberg, Germany)
    Elmroth, Erik (Umeå University, Faculty of Science and Technology, Department of Computing Science)
    Automated hyperparameter tuning for adaptive cloud workload prediction (2023). In: UCC '23: Proceedings of the IEEE/ACM 16th International Conference on Utility and Cloud Computing, New York: Association for Computing Machinery (ACM), 2023. Conference paper (Refereed)
    Abstract [en]

    Efficient workload prediction is essential for timely resource provisioning in cloud computing environments. However, achieving accurate predictions, ensuring adaptability to changing conditions, and minimizing computation overhead pose significant challenges for workload prediction models. Furthermore, the continuous streaming nature of workload metrics requires careful consideration when applying machine learning and data mining algorithms, as manual hyperparameter optimization is time-consuming and often suboptimal. We propose an automated parameter tuning and adaptation approach for workload prediction models and for the concept drift detection algorithms used alongside them. Our method leverages a knowledge base built from statistical features of historical data, enabling automatic adjustment of model weights and concept drift detection parameters; model adaptation is further supported through transfer learning. We evaluate the effectiveness of the automated approach by comparing it with static approaches on synthetic and real-world datasets. In our experiments, automating the parameter tuning process and integrating concept drift detection improved the accuracy and efficiency of workload prediction models by 50%.

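The abstract above describes choosing prediction-model and drift-detection parameters from a pre-built knowledge base keyed on statistical features of historical workloads. The Python sketch below is a hypothetical illustration of that idea only, not the authors' implementation: the feature set, the knowledge-base entries, and the hyperparameter names are all assumptions made for the example.

```python
# Illustrative sketch only: map statistical features of a recent workload
# window to pre-profiled hyperparameters via nearest-neighbour lookup.
# The features, knowledge-base entries, and hyperparameter names below are
# hypothetical; the paper's actual knowledge base is not reproduced here.
import numpy as np

# Hypothetical knowledge base: feature vector -> suggested hyperparameters.
KNOWLEDGE_BASE = [
    (np.array([0.2, 0.05, 0.9]), {"learning_rate": 1e-3, "drift_threshold": 0.10}),
    (np.array([0.6, 0.30, 0.4]), {"learning_rate": 1e-2, "drift_threshold": 0.25}),
    (np.array([0.9, 0.50, 0.1]), {"learning_rate": 5e-2, "drift_threshold": 0.40}),
]

def workload_features(window: np.ndarray) -> np.ndarray:
    """Summarise a workload window with simple statistics: normalised mean,
    coefficient of variation, and lag-1 autocorrelation."""
    mean = window.mean()
    cv = window.std() / (mean + 1e-9)
    lag1 = np.corrcoef(window[:-1], window[1:])[0, 1]
    return np.array([mean / (window.max() + 1e-9), cv, lag1])

def suggest_hyperparameters(window: np.ndarray) -> dict:
    """Return the hyperparameters of the closest knowledge-base entry."""
    feats = workload_features(window)
    distances = [np.linalg.norm(feats - ref) for ref, _ in KNOWLEDGE_BASE]
    return KNOWLEDGE_BASE[int(np.argmin(distances))][1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    window = 100 + 10 * np.sin(np.linspace(0, 6, 120)) + rng.normal(0, 2, 120)
    print(suggest_hyperparameters(window))
```

In this sketch the nearest-neighbour match merely stands in for whatever lookup the paper's knowledge base performs; a real system would also feed the returned drift threshold into its concept drift detector and adapt the prediction model via transfer learning.
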
  • 2.
    Kidane, Lidia (Umeå University, Faculty of Science and Technology, Department of Computing Science)
    Townend, Paul (Umeå University, Faculty of Science and Technology, Department of Computing Science)
    Metsch, Thijs (Intel Corporation, Germany)
    Elmroth, Erik (Umeå University, Faculty of Science and Technology, Department of Computing Science)
    When and How to Retrain Machine Learning-based Cloud Management Systems (2022). In: 2022 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), IEEE, 2022, p. 688-698. Conference paper (Refereed)
    Abstract [en]

    Cloud management systems increasingly rely on machine learning (ML) models to predict incoming workload rates, load, and other system behaviors for efficient dynamic resource management. Current state-of-the-art prediction models demonstrate high accuracy, but assume that data patterns remain stable. In production use, however, systems may face hardware upgrades, changes in user behavior, and other events that lead to concept drift: significant changes in the characteristics of data streams over time. To mitigate prediction deterioration, ML models need to be updated, but the questions of when and how best to retrain these models remain unsolved in the context of cloud management. We present a pilot study that addresses these questions for one of the most common models for adaptive prediction, Long Short-Term Memory (LSTM), using synthetic and real-world workload data. Our analysis of when to retrain explores how to detect that retraining is required, using both concept drift detection and prediction error thresholds, and at what point retraining should actually take place. Our analysis of how to retrain focuses on the data required for retraining, and what proportion should be taken from before and after the need for retraining is detected. Our initial results indicate that retraining existing models can achieve prediction accuracy close to that of newly trained models at much lower cost, and we offer initial advice on how to provide cloud management systems with support for automatic retraining of ML-based methods.
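
To make the when/how questions above concrete, the sketch below shows one simple way a retraining trigger and a retraining window could be wired together. It is a hypothetical illustration, not the paper's method: the thresholds, window sizes, the mean-shift drift check, and the pre/post mixing ratio are assumptions, and the actual LSTM model is omitted to keep the example dependency-free.

```python
# Illustrative sketch only: a retraining trigger combining a prediction-error
# threshold with a simple mean-shift drift check, plus a helper that builds a
# retraining window from data before and after the detected drift point.
# All thresholds, window sizes, and ratios are assumed values.
from collections import deque
import statistics

class RetrainTrigger:
    def __init__(self, error_threshold=0.2, window=50, drift_factor=3.0):
        self.error_threshold = error_threshold      # tolerated mean relative error
        self.drift_factor = drift_factor            # stdevs away that count as drift
        self.recent_errors = deque(maxlen=window)   # sliding window of errors
        self.baseline = deque(maxlen=window)        # sliding window of observations

    def update(self, predicted: float, observed: float) -> bool:
        """Record one prediction/observation pair; return True when retraining
        should be triggered (sustained high error or a mean-shift style drift)."""
        self.recent_errors.append(abs(predicted - observed) / max(observed, 1e-9))
        drift = False
        if len(self.baseline) == self.baseline.maxlen:
            mean = statistics.fmean(self.baseline)
            stdev = statistics.pstdev(self.baseline) or 1e-9
            drift = abs(observed - mean) > self.drift_factor * stdev
        self.baseline.append(observed)
        high_error = (
            len(self.recent_errors) == self.recent_errors.maxlen
            and statistics.fmean(self.recent_errors) > self.error_threshold
        )
        return drift or high_error

def retraining_window(history, drift_index, pre_fraction=0.3, size=500):
    """Mix samples from before and after the detected drift point so the
    retrained model keeps some old context but emphasises the new regime."""
    n_pre = int(size * pre_fraction)
    pre = history[max(0, drift_index - n_pre):drift_index]
    post = history[drift_index:drift_index + (size - n_pre)]
    return pre + post
```

Here the assumed pre_fraction parameter corresponds loosely to the paper's question of what proportion of retraining data to take from before versus after the detected change; tuning it trades retention of the old regime against responsiveness to the new one.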
