umu.se Publications
1 - 13 of 13
  • 1.
    Krzywda, Jakub
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Analysing, modelling and controlling power-performance tradeoffs in data center infrastructures (2017). Licentiate thesis, compilation (Other academic)
    Abstract [en]

    The aim of this thesis is to analyse the power-performance tradeoffs in data center servers, create models that capture these tradeoffs, and propose controllers that optimise the use of data center infrastructures while taking the tradeoffs into consideration. The main research problem investigated in this thesis is how to increase the power efficiency of data center servers while taking the power-performance tradeoffs into account.

    The main motivation for this research is the massive power consumption of data centers, which is a concern from both financial and environmental perspectives. Irrespective of the approach taken to enhance data center power efficiency, substantial reductions in the power consumption of data center servers easily lead to performance degradation of hosted applications, which causes customer dissatisfaction. Therefore, it is crucial for data center operators to understand and control the power-performance tradeoffs.

    The research methods used in this thesis include experiments on real testbeds, applying statistical methods to create power-performance models, development of various optimisation techniques to improve the energy-efficiency of servers, and simulations to evaluate proposed solutions at scale.

    As a result of the research presented in this thesis, we propose taxonomies for selected aspects of data center configurations, events, management actions, and monitored metrics. We discuss the relationships between these elements and, to support the analysis, present results from a set of testbed experiments. We show limitations in the applicability of various data center management actions, including Dynamic Voltage and Frequency Scaling (DVFS), Running Average Power Limit (RAPL), CPU pinning, and horizontal and vertical scaling. Finally, we propose a power budgeting controller that minimizes performance degradation while enforcing power limits.

    The outcomes of this thesis can be used by data center operators to improve the energy efficiency of servers and reduce overall power consumption with minimal performance degradation. Moreover, the software artifacts, including virtual machine images, scripts, and a simulator, are available online.

    Future work includes further investigation of graceful performance degradation under power limits, incorporating multi-layer applications spread across several servers, and a load-balancing controller.

  • 2.
    Krzywda, Jakub
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    May the power be with you: managing power-performance tradeoffs in cloud data centers (2019). Doctoral thesis, compilation (Other academic)
    Abstract [en]

    The overall goal of the work presented in this thesis was to find ways of managing power-performance tradeoffs in cloud data centers. To this end, the relationships between the power consumption of data center servers and the performance of applications hosted in data centers are analyzed, models that capture these relationships are developed, and controllers to optimize the use of data center infrastructures are proposed.

    The studies were motivated by the massive power consumption of modern data centers, which is a matter of significant financial and environmental concern. Various strategies for improving the power efficiency of data centers have been proposed, including server consolidation, server throttling, and power budgeting. However, no matter what strategy is used to enhance data center power efficiency, substantial reductions in the power consumption of data center servers can easily degrade the performance of hosted applications, causing customer dissatisfaction. It is therefore crucial for data center operators to understand and control power-performance tradeoffs.

    The research methods used in this work include experiments on real testbeds, the application of statistical methods to create power-performance models, development of various optimization techniques to improve the power efficiency of servers, and simulations to evaluate the proposed solutions at scale.

    This thesis makes multiple contributions. First, it introduces taxonomies for various aspects of data center configuration, events, management actions, and monitored metrics. We discuss the relationships between these elements and support our analysis with results from a set of testbed experiments. We demonstrate limitations on the usefulness of various data center management actions for controlling power consumption, including Dynamic Voltage Frequency Scaling (DVFS) and Running Average Power Limit (RAPL). We also demonstrate similar limitations on common measures for controlling application performance, including variation of operating system scheduling parameters, CPU pinning, and horizontal and vertical scaling. Finally, we propose a set of power budgeting controllers that act at the application, server, and cluster levels to minimize performance degradation while enforcing power limits.

    The results and analysis presented in this thesis can be used by data center operators to improve the power-efficiency of servers and reduce overall operational costs while minimizing performance degradation. All of the software generated during this work, including controller source code, virtual machine images, scripts, and simulators, has been open-sourced.

  • 3.
    Krzywda, Jakub
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Ali-Eldin, A.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. College of Information and Computer Sciences, University of Massachusetts Amherst.
    Wadbro, Eddie
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Östberg, Per-Olov
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Elmroth, Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    ALPACA: Application Performance Aware Server Power Capping (2018). In: ICAC 2018: 2018 IEEE International Conference on Autonomic Computing (ICAC), Trento, Italy, September 3-7, 2018, IEEE Computer Society, 2018, pp. 41-50. Conference paper (Refereed)
    Abstract [en]

    Server power capping limits the power consumption of a server so that it does not exceed a specific power budget. This allows data center operators to reduce the peak power consumption at the cost of performance degradation of hosted applications. Previous work on server power capping rarely considers the Quality-of-Service (QoS) requirements of consolidated services when enforcing the power budget. In this paper, we introduce ALPACA, a framework to reduce QoS violations and overall application performance degradation for consolidated services. ALPACA reduces unnecessarily high power consumption when there is no performance gain, and divides the power among the running services in a way that reduces the overall QoS degradation when power is scarce. We evaluate ALPACA using four applications: MediaWiki, SysBench, Sock Shop, and CloudSuite’s Web Search benchmark. Our experiments show that ALPACA reduces the operational costs of QoS penalties and electricity by up to 40% compared to a non-optimized system.
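
    ALPACA's actual models and optimisation are described in the paper itself; purely as an illustration of the general idea of dividing a scarce server power budget among consolidated services, the sketch below greedily gives each power increment to the service with the largest marginal QoS gain. The utility curves, numbers, and function names are hypothetical and not taken from the paper.

```python
# Illustrative sketch only (not the ALPACA algorithm): greedily divide a
# server power budget among consolidated services, always giving the next
# power increment to the service with the largest marginal QoS gain.
# The utility curves below are invented, diminishing-returns placeholders.

def marginal_gain(utility, watts, step):
    """Hypothetical QoS gain from giving a service `step` more watts."""
    return utility(watts + step) - utility(watts)

def divide_power(budget_w, services, step=5.0):
    """services: dict mapping service name -> utility(watts) function."""
    alloc = {name: 0.0 for name in services}
    remaining = budget_w
    while remaining >= step:
        best = max(services, key=lambda n: marginal_gain(services[n], alloc[n], step))
        if marginal_gain(services[best], alloc[best], step) <= 0:
            break  # no service benefits from additional power
        alloc[best] += step
        remaining -= step
    return alloc

if __name__ == "__main__":
    services = {
        "mediawiki": lambda w: 1.0 - 2.0 ** (-w / 30.0),
        "websearch": lambda w: 1.0 - 2.0 ** (-w / 50.0),
    }
    print(divide_power(120.0, services))  # per-service allocation in watts
```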

  • 4.
    Krzywda, Jakub
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Ali-Eldin, Ahmed
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Carlson, Trevor E.
    Department of Information Technology, Uppsala University, SE-751 05 Uppsala, Sweden.
    Östberg, Per-Olov
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Elmroth, Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Power-performance tradeoffs in data center servers: DVFS, CPU pinning, horizontal, and vertical scaling (2018). In: Future Generation Computer Systems, ISSN 0167-739X, E-ISSN 1872-7115, Vol. 81, pp. 114-128. Journal article (Refereed)
    Abstract [en]

    Dynamic Voltage and Frequency Scaling (DVFS), CPU pinning, horizontal scaling, and vertical scaling are four techniques that have been proposed as actuators to control the performance and energy consumption of data center servers. This work investigates the utility of these four actuators and quantifies the power-performance tradeoffs associated with them. Using replicas of the German Wikipedia running on our local testbed, we perform a set of experiments to quantify the influence of DVFS, vertical and horizontal scaling, and CPU pinning on end-to-end response time (average and tail), throughput, and power consumption under different workloads. The results show that DVFS rarely reduces the power consumption of underloaded servers by more than 5%, but it can be used to limit the maximal power consumption of a saturated server by up to 20% (at the cost of performance degradation). CPU pinning reduces the power consumption of underloaded servers (by up to 7%) at the cost of performance degradation, which can be limited by choosing an appropriate CPU pinning scheme. Horizontal and vertical scaling improve both the average and tail response time, but the improvement is not proportional to the amount of resources added. The load balancing strategy has a large impact on the tail response time of horizontally scaled applications.
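
    For readers unfamiliar with these actuators, the sketch below shows how two of them are commonly exercised on a Linux server: capping the DVFS frequency of a core through the cpufreq sysfs interface, and pinning a process to specific cores with os.sched_setaffinity. This is an illustrative example under assumed paths and values, not the setup used in the paper, and the sysfs writes require root privileges.

```python
# Illustrative example of exercising DVFS and CPU pinning on Linux.
# Paths and values are examples only; writing to sysfs requires root.
import os

def cap_cpu_frequency(cpu: int, max_khz: int) -> None:
    """Limit the maximum DVFS frequency of one core via the cpufreq interface."""
    path = f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_max_freq"
    with open(path, "w") as f:
        f.write(str(max_khz))

def pin_process(pid: int, cpus: set) -> None:
    """Pin a process (e.g. a VM or container main process) to the given cores."""
    os.sched_setaffinity(pid, cpus)

if __name__ == "__main__":
    for cpu in range(os.cpu_count() or 1):
        cap_cpu_frequency(cpu, 1_800_000)   # cap all cores at 1.8 GHz
    pin_process(os.getpid(), {0, 1})        # pin this process to cores 0 and 1
```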

  • 5.
    Krzywda, Jakub
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Ali-Eldin, Ahmed
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. College of Information and Computer Sciences, University of Massachusetts Amherst.
    Wadbro, Eddie
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Östberg, Per-Olov
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Elmroth, Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Power Shepherd: Application Performance Aware Power Shifting. Manuscript (preprint) (Other academic)
    Abstract [en]

    The constantly growing power consumption of data centers is a major concern for environmental and economic reasons. Current approaches to reducing the negative consequences of high power consumption focus on limiting peak power consumption. During high-workload periods, the power consumption of highly utilized servers is throttled in order to stay within the power budget. However, peak power reduction affects the performance of hosted applications and thus leads to Quality of Service (QoS) violations. In this paper, we introduce Power Shepherd, a hierarchical system for application performance aware power shifting. Power Shepherd reduces data center operational costs by redistributing the available power among the applications hosted in the cluster. This is achieved by assigning server power budgets with a cluster controller, enforcing these power budgets using Running Average Power Limit (RAPL), and prioritizing applications within each server by adjusting the CPU scheduling configuration. We implement a prototype of the proposed solution and evaluate it on a real testbed equipped with power meters, using representative cloud applications. Our experiments show that Power Shepherd has the potential to manage a cluster consisting of thousands of servers and to significantly limit the increase in operational costs when the cluster power budget is constrained and the system is overutilized. Finally, we identify some outstanding challenges regarding model sensitivity and the fact that, in its current form, the approach is not beneficial in all situations, e.g., when the system is underutilized.
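
    Power Shepherd's controllers and models are specified in the manuscript itself; the sketch below only illustrates the two enforcement mechanisms named in the abstract, under the assumption that RAPL is driven through the Linux powercap sysfs interface and that per-application CPU priorities are expressed as cgroup v2 weights. The paths, cgroup names, and values are assumptions, and the writes require root privileges.

```python
# Illustrative sketch of the enforcement mechanisms mentioned above:
# a package power limit via the Linux powercap (RAPL) interface and
# relative CPU priorities via cgroup v2 weights. Paths, names, and
# values are assumptions, not Power Shepherd's actual configuration.

RAPL_LIMIT = "/sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw"

def set_package_power_limit(watts: float) -> None:
    """Enforce a long-term RAPL power limit on CPU package 0 (in microwatts)."""
    with open(RAPL_LIMIT, "w") as f:
        f.write(str(int(watts * 1_000_000)))

def set_cpu_weight(cgroup: str, weight: int) -> None:
    """Set the relative CPU priority of an application's cgroup (1-10000)."""
    with open(f"/sys/fs/cgroup/{cgroup}/cpu.weight", "w") as f:
        f.write(str(weight))

if __name__ == "__main__":
    set_package_power_limit(60.0)             # server budget set at the cluster level
    set_cpu_weight("latency-critical", 800)   # favour the latency-critical service
    set_cpu_weight("batch", 100)              # deprioritise the batch workload
```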

  • 6.
    Krzywda, Jakub
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Meyer, Vinicius
    Xavier, Miguel G.
    Ali-Eldin, Ahmed
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. College of Information and Computer Sciences, University of Massachusetts Amherst.
    Östberg, Per-Olov
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    De Rose, Cesar A. F.
    Elmroth, Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Modeling and Simulation of QoS-Aware Power Budgeting in Cloud Data Centers. Manuscript (preprint) (Other academic)
    Abstract [en]

    Power budgeting is a commonly employed solution for reducing the negative consequences of the high power consumption of large-scale data centers. While various power budgeting techniques and algorithms have been proposed at different levels of the data center infrastructure to optimize the allocation of power to servers and hosted applications, testing them has been challenging, as no simulation platform has been available that enables such testing across different scenarios and configurations. To facilitate the evaluation and comparison of such techniques and algorithms, we introduce a simulation model for Quality-of-Service aware power budgeting and its implementation in CloudSim. We validate the proposed simulation model against a deployment on a real testbed, showcase the simulator's capabilities, and evaluate its scalability.

  • 7.
    Krzywda, Jakub
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Tärneberg, William
    Dept. of Electrical and Information Technology, Lund University.
    Östberg, Per-Olov
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Kihl, Maria
    Dept. of Electrical and Information Technology, Lund University.
    Elmroth, Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Telco Clouds: Modelling and Simulation (2015). Conference paper (Refereed)
  • 8.
    Krzywda, Jakub
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Östberg, Per-Olov
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Elmroth, Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    A Sensor-Actuator Model for Data Center Optimization (2015). In: 2015 International Conference on Cloud and Autonomic Computing (ICCAC), New York: IEEE Computer Society, 2015, pp. 192-195. Conference paper (Refereed)
    Abstract [en]

    Cloud data centers commonly use virtualization technologies to provision compute capacity with a level of indirection between virtual machines and physical resources. In this paper we explore the use of that level of indirection as a means for autonomic data center configuration optimization and propose a sensor-actuator model to capture optimization-relevant relationships between data center events, monitored metrics (sensor data), and management actions (actuators). The model characterizes a wide spectrum of actions to help identify the suitability of different actions in specific situations, and outlines what data needs to be monitored (and how often) to capture, classify, and respond to events that affect the performance of data center operations.
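
    As a very small illustration of how relationships of this kind could be encoded (this is not the model from the paper), the sketch below maps hypothetical events to the metrics that reveal them and the management actions that can respond to them.

```python
# Hypothetical, minimal encoding of sensor-actuator relationships:
# which monitored metrics reveal an event, and which management actions
# (actuators) can respond to it. Names and numbers are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Actuator:
    name: str
    scope: str                 # e.g. "core", "server", "cluster"
    actuation_delay_s: float   # how quickly the action takes effect

@dataclass
class Event:
    name: str
    detected_by: list                       # metric names (sensor data)
    handled_by: list = field(default_factory=list)

dvfs = Actuator("DVFS", scope="core", actuation_delay_s=0.001)
migration = Actuator("VM live migration", scope="cluster", actuation_delay_s=30.0)

events = [
    Event("server overload", detected_by=["cpu_utilization", "response_time"],
          handled_by=[dvfs, migration]),
    Event("power budget violation", detected_by=["server_power_w"],
          handled_by=[dvfs]),
]

for ev in events:
    actions = ", ".join(a.name for a in ev.handled_by)
    print(f"{ev.name}: monitor {ev.detected_by}; possible actions: {actions}")
```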

  • 9.
    Krzywda, Jakub
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Östberg, Per-Olov
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Elmroth, Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    A Sensor-Actuator Model for Data Center Optimization2015Rapport (Övrigt vetenskapligt)
  • 10.
    Papadopoulos, Alessandro Vittorio
    et al.
    Krzywda, Jakub
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Elmroth, Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Maggio, Martina
    Power-aware cloud brownout: response time and power consumption control (2017). In: 2017 IEEE 56th Annual Conference on Decision and Control (CDC), IEEE, 2017, pp. 2686-2691. Conference paper (Refereed)
    Abstract [en]

    Cloud computing infrastructures are powering most of the web hosting services that we use at all times. A recent failure in the Amazon cloud infrastructure made many of the websites that we use on an hourly basis unavailable. This illustrates the importance of cloud applications being able to absorb peaks in workload, and at the same time to tune their power requirements to the power and energy capacity offered by the data center infrastructure. In this paper we combine an established technique for response time control - brownout - with power capping. We use cascaded control to take into account both the need for predictability in the response times (the inner loop) and the power cap (the outer loop). We execute tests on real machines to determine power usage and response time models and extend an existing simulator. We then evaluate the cascaded controller approach with a variety of workloads and both open- and closed-loop client models.

  • 11.
    Papadopoulos, Alessandro Vittorio
    et al.
    IDT, Mälardalen University, Sweden.
    Krzywda, Jakub
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Elmroth, Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Maggio, Martina
    Department of Automatic Control, Lund University, Sweden.
    Power-aware cloud brownout: Response time and power consumption control (2017). In: 2017 IEEE 56th Annual Conference on Decision and Control (CDC), IEEE, 2017, pp. 2686-2691. Conference paper (Refereed)
    Abstract [en]

    Cloud computing infrastructures are powering most of the web hosting services that we use at all times. A recent failure in the Amazon cloud infrastructure made many of the websites that we use on an hourly basis unavailable. This illustrates the importance of cloud applications being able to absorb peaks in workload, and at the same time to tune their power requirements to the power and energy capacity offered by the data center infrastructure. In this paper we combine an established technique for response time control - brownout - with power capping. We use cascaded control to take into account both the need for predictability in the response times (the inner loop) and the power cap (the outer loop). We execute tests on real machines to determine power usage and response time models and extend an existing simulator. We then evaluate the cascaded controller approach with a variety of workloads and both open- and closed-loop client models.
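
    The controllers themselves are derived and tuned in the paper; purely to illustrate the cascaded structure described in the abstract, the sketch below nests a response-time loop (which adjusts the brownout dimmer, i.e. the fraction of optional content served) inside an outer loop that tracks the power cap. The gains, setpoints, and ranges are invented placeholders.

```python
# Illustrative cascaded-control skeleton (not the controller from the paper).
# Outer loop: power cap -> response-time setpoint.
# Inner loop: response-time setpoint -> brownout dimmer.
# All gains, setpoints, and ranges are invented.

def pi_controller(kp: float, ki: float):
    """Return a simple discrete PI controller as a closure."""
    integral = 0.0
    def step(error: float, dt: float) -> float:
        nonlocal integral
        integral += error * dt
        return kp * error + ki * integral
    return step

def clamp(x: float, lo: float, hi: float) -> float:
    return max(lo, min(hi, x))

class CascadedPowerBrownoutController:
    def __init__(self, power_cap_w: float):
        self.power_cap_w = power_cap_w
        self.outer = pi_controller(kp=0.005, ki=0.001)  # watts -> setpoint change
        self.inner = pi_controller(kp=0.5, ki=0.1)      # seconds -> dimmer change
        self.rt_setpoint_s = 0.5
        self.dimmer = 1.0   # fraction of optional content served

    def step(self, measured_power_w: float, measured_rt_s: float, dt: float = 1.0) -> float:
        # Outer loop: when power exceeds the cap, tighten the response-time
        # setpoint so the inner loop sheds optional content (and power).
        self.rt_setpoint_s = clamp(
            self.rt_setpoint_s + self.outer(self.power_cap_w - measured_power_w, dt),
            0.1, 2.0)
        # Inner loop: track the response-time setpoint with the dimmer.
        self.dimmer = clamp(
            self.dimmer + self.inner(self.rt_setpoint_s - measured_rt_s, dt),
            0.0, 1.0)
        return self.dimmer

if __name__ == "__main__":
    ctrl = CascadedPowerBrownoutController(power_cap_w=80.0)
    print(ctrl.step(measured_power_w=95.0, measured_rt_s=0.7))  # dimmer decreases
```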

  • 12.
    Stier, Christian
    et al.
    Domaschka, Jörg
    Koziolek, Anne
    Krach, Sebastian
    Krzywda, Jakub
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Reussner, Ralf
    Rapid Testing of IaaS Resource Management Algorithms via Cloud Middleware Simulation (2018). In: Proceedings of the 2018 ACM/SPEC International Conference on Performance Engineering, ACM Digital Library, 2018, pp. 184-191. Conference paper (Refereed)
    Abstract [en]

    Infrastructure as a Service (IaaS) Cloud services allow users to deploy distributed applications in a virtualized environment without having to customize their applications to a specific Platform as a Service (PaaS) stack. It is common practice to host multiple Virtual Machines (VMs) on the same server to save resources. Traditionally, IaaS data center management required manual effort for optimization, e.g. by consolidating VM placement based on changes in usage patterns. Many resource management algorithms and frameworks have been developed to automate this process. Resource management algorithms are typically tested via experimentation or simulation. The main drawback of both approaches is the high effort required to conduct the testing. Existing Cloud or IaaS simulators require the algorithm engineer to reimplement their algorithm against the simulator's API. Furthermore, the engineer needs to manually define the workload model used for algorithm testing. We propose an approach for the simulative analysis of IaaS Cloud infrastructure that allows algorithm engineers and data center operators to evaluate optimization algorithms without investing additional effort to reimplement them in a simulation environment. By leveraging runtime monitoring data, we automatically construct the simulation models used to test the algorithms. Our validation shows that algorithm tests conducted using our IaaS Cloud simulator match the measured behavior on actual hardware.

  • 13.
    Östberg, Per-Olov
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Groenda, Henning
    Wesner, Stefan
    Byrne, James
    Nikolopoulos, Dimitris S.
    Sheridan, Craig
    Krzywda, Jakub
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Ali-Eldin, Ahmed
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Tordsson, Johan
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Elmroth, Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Stier, Christian
    Krogmann, Klaus
    Domaschka, Jörg
    Hauser, Christopher B.
    Byrne, PJ
    Svorobej, Sergej
    McCollum, Barry
    Papazachos, Zafeiros
    Whigham, Darren
    Rüth, Stefan
    Paurevic, Dragana
    The CACTOS Vision of Context-Aware Cloud Topology Optimization and Simulation (2014). In: 2014 IEEE 6th International Conference on Cloud Computing Technology and Science, 2014, pp. 26-31. Conference paper (Refereed)
    Abstract [en]

    Recent advances in hardware development coupled with the rapid adoption and broad applicability of cloud computing have introduced widespread heterogeneity in data centers, significantly complicating the management of cloud applications and data center resources. This paper presents the CACTOS approach to cloud infrastructure automation and optimization, which addresses heterogeneity through a combination of in-depth analysis of application behavior with insights from commercial cloud providers. The aim of the approach is threefold: to model applications and data center resources, to simulate applications and resources for planning and operation, and to optimize application deployment and resource use in an autonomic manner. The approach is based on case studies from the areas of business analytics, enterprise applications, and scientific computing.
