PEAS: A Performance Evaluation framework for Auto-Scaling strategies in cloud applications
Lund University.
Umeå University, Faculty of Science and Technology, Department of Computing Science. (Distributed Systems)
Lund University.
Umeå University, Faculty of Science and Technology, Department of Computing Science. (Distributed Systems)
2015 (English). Manuscript (preprint) (Other academic)
Abstract [en]

Numerous auto-scaling strategies have been proposed in the last few years for improving various Quality of Service (QoS) indicators of cloud applications, e.g., response time and throughput, by adapting the amount of resources assigned to the application to meet the workload demand. However, the evaluation of a proposed auto-scaler is usually achieved through experiments under specific conditions, and seldom includes extensive testing to account for uncertainties in the workloads and unexpected behaviors of the system. Such tests by no means can provide guarantees about the behavior of the system in general conditions. In this paper, we present PEAS, a Performance Evaluation framework for Auto-Scaling strategies in the presence of uncertainties. The evaluation is formulated as a chance constrained optimization problem, which is solved using scenario theory. The adoption of such a technique allows one to give probabilistic guarantees on the obtainable performance. Six different auto-scaling strategies have been selected from the literature for extensive test evaluation and compared using the proposed framework. We build a discrete event simulator and parameterize it based on real experiments. Using the simulator, each auto-scaler's performance is evaluated using 796 distinct real workload traces from projects hosted on the Wikimedia Foundation's servers, and their performance is compared using PEAS. The evaluation is carried out using different performance metrics, highlighting the flexibility of the framework, while providing probabilistic bounds on the evaluation and the performance of the algorithms. Our results highlight the problem of generalizing the conclusions of the original published studies and show that, depending on the evaluation criteria, a controller can be shown to be better than the other controllers.
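To make the nature of the guarantee concrete, the following is a minimal sketch of the standard scenario-theory construction behind a chance-constrained evaluation of this kind; the symbols (performance metric f, performance bound h, risk level ε, confidence β) and the sample-size condition are illustrative textbook notation, not necessarily the exact formulation used in PEAS.

\[
\min_{h \in \mathbb{R}} \; h
\quad \text{s.t.} \quad
\Pr_{\delta}\!\big[\, f(\delta) \le h \,\big] \;\ge\; 1 - \varepsilon ,
\]

where \(\delta\) is a random workload realization and \(f(\delta)\) the metric (e.g., average response time) achieved by an auto-scaler on that workload. Drawing \(N\) independent traces \(\delta^{(1)}, \dots, \delta^{(N)}\) gives the scenario program

\[
h^{*}_{N} \;=\; \min_{h} \; h
\quad \text{s.t.} \quad
f\big(\delta^{(i)}\big) \le h, \quad i = 1, \dots, N ,
\]

whose solution is simply \(h^{*}_{N} = \max_i f(\delta^{(i)})\). Scenario theory then states that, with confidence at least \(1-\beta\), a new workload violates the bound \(h^{*}_{N}\) with probability at most \(\varepsilon\), provided \(N\) satisfies a sufficient condition such as \(N \ge \tfrac{2}{\varepsilon}\big(\ln\tfrac{1}{\beta} + d\big)\), with \(d\) the number of decision variables (\(d = 1\) here). As a worked illustration under that textbook bound, \(N = 796\) traces and \(d = 1\) allow \(\varepsilon \approx 0.02\) at \(\beta = 10^{-3}\), i.e., the empirical worst case is exceeded by at most roughly 2% of unseen workloads with 99.9% confidence.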

Place, publisher, year, edition, pages
2015.
National Category
Computer Systems
Identifiers
URN: urn:nbn:se:umu:diva-108394
OAI: oai:DiVA.org:umu-108394
DiVA: diva2:852768
Funder
Swedish Research Council
EU, European Research Council
Note

Submitted

Available from: 2015-09-10 Created: 2015-09-10 Last updated: 2015-09-24
In thesis
1. Workload characterization, controller design and performance evaluation for cloud capacity autoscaling
2015 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

This thesis studies cloud capacity auto-scaling, or how to provision and release resources to a service running in the cloud based on its actual demand using an automatic controller. As the performance of server systems depends on the system design, the system implementation, and the workloads the system is subjected to, we focus on these aspects with respect to designing auto-scaling algorithms. Towards this goal, we design and implement two auto-scaling algorithms for cloud infrastructures. The algorithms predict the future load for an application running in the cloud. We discuss the different approaches to designing an auto-scaler combining reactive and proactive control methods, and to be able to handle long-running requests, e.g., tasks running for longer than the actuation interval, in a cloud. We compare the performance of our algorithms with state-of-the-art auto-scalers and evaluate the controllers' performance with a set of workloads. As any controller is designed with an assumption on the operating conditions and system dynamics, the performance of an auto-scaler varies with different workloads.

In order to better understand the workload dynamics and evolution, we analyze a 6-year-long workload trace of the sixth most popular Internet website. In addition, we analyze a workload from one of the largest Video-on-Demand streaming services in Sweden. We discuss the popularity of objects served by the two services, the spikes in the two workloads, and the invariants in the workloads. We also introduce a measure for the disorder in a workload, i.e., the amount of burstiness. The measure is based on Sample Entropy, an empirical statistic used in biomedical signal processing to characterize biomedical signals. The introduced measure can be used to characterize the workloads based on their burstiness profiles. We compare our introduced measure with the literature on quantifying burstiness in a server workload, and show the advantages of our introduced measure.

To better understand the tradeoffs between using different auto-scalers with different workloads, we design a framework to compare auto-scalers and give probabilistic guarantees on the performance in worst-case scenarios. Using different evaluation criteria and more than 700 workload traces, we compare six state-of-the-art auto-scalers that we believe represent the development of the field in the past 8 years. Knowing that the auto-scalers' performance depends on the workloads, we design a workload analysis and classification tool that assigns a workload to its most suitable elasticity controller out of a set of implemented controllers. The tool has two main components: an analyzer and a classifier. The analyzer analyzes a workload and feeds the analysis results to the classifier. The classifier assigns a workload to the most suitable elasticity controller based on the workload characteristics and a set of predefined business level objectives. The tool is evaluated with a set of collected real workloads and a set of generated synthetic workloads. Our evaluation results show that the tool can help a cloud provider to improve the QoS provided to the customers.
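Since the burstiness measure above is described only as being based on Sample Entropy, the sketch below shows the textbook SampEn(m, r) computation in Python; the function name, the default tolerance r = 0.2·std(x), and the example traces are illustrative assumptions, not the exact parameterization used in the thesis.

import numpy as np

def sample_entropy(x, m=2, r=None):
    """Textbook Sample Entropy SampEn(m, r) of a 1-D series x.

    Counts pairs of template vectors of length m (and m + 1) whose
    Chebyshev distance is within the tolerance r, excluding
    self-matches, and returns -ln(A / B).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)  # common default tolerance (illustrative)

    def count_matches(length):
        # All overlapping templates of the given length (simplified:
        # uses every available template of each length).
        templates = np.array([x[i:i + length] for i in range(n - length + 1)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance from template i to every template.
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(dist <= r) - 1  # subtract the self-match
        return count

    b = count_matches(m)       # matches of length m
    a = count_matches(m + 1)   # matches of length m + 1
    if a == 0 or b == 0:
        return float("inf")    # no matches: treat as maximally irregular
    return -np.log(a / b)

# A bursty trace scores higher (more disorder) than a smooth one.
rng = np.random.default_rng(0)
smooth = np.sin(np.linspace(0, 8 * np.pi, 500))
bursty = rng.poisson(5, 500).astype(float)
print(sample_entropy(smooth), sample_entropy(bursty))

Higher values indicate a more irregular, harder-to-predict series, which is the sense in which the statistic captures burstiness.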

Place, publisher, year, edition, pages
Umeå: Umeå University, 2015. 16 p.
Series
Report / UMINF, ISSN 0348-0542 ; 15.09
Keyword
cloud computing, autoscaling, workloads, performance modeling, controller design
National Category
Computer Systems
Identifiers
urn:nbn:se:umu:diva-108398 (URN)
978-91-7601-330-4 (ISBN)
Public defence
2015-10-02, N360, Naturvetarhuset Building, Umeå University, Umeå, 14:00 (English)
Opponent
Supervisors
Funder
EU, European Research Council
Swedish Research Council
Available from: 2015-09-11 Created: 2015-09-10 Last updated: 2015-10-07. Bibliographically approved

Open Access in DiVA

No full text

Search in DiVA

By author/editor
Ali-Eldin, Ahmed; Tordsson, Johan; Elmroth, Erik
By organisation
Department of Computing Science
Computer Systems
