umu.se Publications
Bayuh Lakew, Ewnetu
Publications (10 of 24)
Karakostas, V., Goumas, G., Bayuh Lakew, E., Elmroth, E., Gerangelos, S., Kolberg, S., . . . Koziris, N. (2018). Efficient Resource Management for Data Centers: The ACTiCLOUD Approach. In: Mudge T., Pnevmatikatos D.N. (Ed.), 2018 International conference on embedded computer systems: architectures, modeling, and simulation (SAMOS XVIII). Paper presented at SAMOS XVIII, July 15–19, 2018, Pythagorion, Samos Island, Greece (pp. 244-246). Association for Computing Machinery (ACM)
Efficient Resource Management for Data Centers: The ACTiCLOUD Approach
2018 (English). In: 2018 International conference on embedded computer systems: architectures, modeling, and simulation (SAMOS XVIII) / [ed] Mudge T., Pnevmatikatos D.N., Association for Computing Machinery (ACM), 2018, p. 244-246. Conference paper, Published paper (Refereed)
Abstract [en]

Despite their proliferation as a dominant computing paradigm, cloud computing systems lack effective mechanisms to manage their vast resources efficiently. Resources are stranded and fragmented, limiting the applicability of clouds to classes of applications with only moderate resource demands. In addition, the need to reduce cost through consolidation introduces performance interference, as multiple VMs are co-located on the same nodes. To avoid such issues, current providers follow a rather conservative approach to resource management that leads to significant underutilization. ACTiCLOUD is a three-year Horizon 2020 project that aims at creating a novel cloud architecture that breaks existing scale-up and share-nothing barriers and enables the holistic management of physical resources, at both local and distributed cloud site levels. This extended abstract provides a brief overview of the resource management part of ACTiCLOUD, focusing on its design principles and components.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2018
Series
ACM International Conference Proceeding Series
Keywords
resource management, resource efficiency, cloud computing, data centers, in-memory databases, NUMA, heterogeneous, scale-up/out
National Category
Computer Systems
Identifiers
urn:nbn:se:umu:diva-154435 (URN); 10.1145/3229631.3236095 (DOI); 000475843000033 (); 2-s2.0-85060997517 (Scopus ID); 978-1-4503-6494-2 (ISBN)
Conference
SAMOS XVIII, July 15–19, 2018, Pythagorion, Samos Island, Greece
Available from: 2018-12-18 Created: 2018-12-18 Last updated: 2019-09-05. Bibliographically approved
Bayuh Lakew, E., Birke, R., Perez, J. F., Elmroth, E. & Chen, L. Y. (2018). SmallTail: Scaling Cores and Probabilistic Cloning Requests for Web Systems. In: 15TH IEEE INTERNATIONAL CONFERENCE ON AUTONOMIC COMPUTING (ICAC 2018): . Paper presented at 15th IEEE International Conference on Autonomic Computing (ICAC), SEP 03-07, 2018, Trento, ITALY (pp. 31-40). IEEE
SmallTail: Scaling Cores and Probabilistic Cloning Requests for Web Systems
2018 (English). In: 15TH IEEE INTERNATIONAL CONFERENCE ON AUTONOMIC COMPUTING (ICAC 2018), IEEE, 2018, p. 31-40. Conference paper, Published paper (Refereed)
Abstract [en]

Users' quality of experience on web systems is largely determined by the tail latency, e.g., the 95th percentile. Scaling resources, e.g., the number of virtual cores per VM, is effective for meeting the average latency but falls short of taming the latency tail in the cloud, where performance variability is higher. Prior art shows the value of increasing request redundancy to curtail latency, but only in offline settings or without scaling in the cores of virtual machines. In this paper, we propose an opportunistic scaler, termed SmallTail, which aims to achieve stringent tail-latency targets while provisioning a minimum amount of resources and keeping them well utilized. Under dynamic workloads, SmallTail simultaneously adjusts the core provisioning per VM and probabilistically replicates requests so as to achieve the tail-latency target. The core of SmallTail is a two-level controller, where the outer loop controls the core provisioning per distributed VM and the inner loop controls the clones at a finer granularity. We also provide a theoretical analysis of the steady-state latency for a given probabilistic replication that clones one out of N arriving requests. We extensively evaluate SmallTail on three different web systems, namely web commerce, web search, and a web bulletin board. Our testbed results show that SmallTail can keep the 95th-percentile latency below 1000 ms using up to 53% fewer cores than constant cloning, whereas a core-scaling-only solution exceeds the latency target by up to 70%.
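The probabilistic cloning policy described above (cloning on average one out of N arriving requests) can be pictured with a small, hypothetical dispatcher. This is a sketch for intuition only, not the authors' SmallTail implementation; the replica list and the send callback are assumptions.

import random

def dispatch(request, replicas, clone_probability, send):
    # Send the request to one replica; with probability clone_probability
    # also send a clone to a different replica, so the faster of the two
    # responses determines the observed (tail) latency.
    primary = random.choice(replicas)
    sent = [send(primary, request)]
    if len(replicas) > 1 and random.random() < clone_probability:
        backup = random.choice([r for r in replicas if r is not primary])
        sent.append(send(backup, request))
    return sent

# Cloning one out of N = 10 requests on average.
N = 10
replicas = ["vm-a", "vm-b", "vm-c"]
print(dispatch("GET /item/42", replicas, 1.0 / N,
               send=lambda vm, req: (vm, req)))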

Place, publisher, year, edition, pages
IEEE, 2018
Series
Proceedings of the International Conference on Autonomic Computing, ISSN 2474-0756
National Category
Computer Systems
Identifiers
urn:nbn:se:umu:diva-155047 (URN); 10.1109/ICAC.2018.00013 (DOI); 000450120900004 (); 978-1-5386-5139-1 (ISBN)
Conference
15th IEEE International Conference on Autonomic Computing (ICAC), SEP 03-07, 2018, Trento, ITALY
Available from: 2019-01-07 Created: 2019-01-07 Last updated: 2019-01-07. Bibliographically approved
Mehta, A., Bayuh Lakew, E., Tordsson, J. & Elmroth, E. (2018). Utility-based Allocation of Industrial IoT Applications in Mobile Edge Clouds. Umeå: Umeå universitet
Utility-based Allocation of Industrial IoT Applications in Mobile Edge Clouds
2018 (English). Report (Other academic)
Abstract [en]

Mobile Edge Clouds (MECs) create new opportunities and challenges in terms of scheduling and running applications that have a wide range of latency requirements, such as intelligent transportation systems, process automation, and smart grids. We propose a two-tier scheduler for allocating runtime resources to Industrial Internet of Things (IIoT) applications in MECs. The higher-level scheduler runs periodically, monitoring system state and application performance, and decides whether to admit new applications and migrate existing ones. In contrast, the lower-level scheduler decides which application will get the runtime resource next. We use performance-based metrics that tell the extent to which the runtimes are meeting the Service Level Objectives (SLOs) of the hosted applications. The Application Happiness metric is based on a single application's performance and SLOs. The Runtime Happiness metric is based on the Application Happiness of the applications the runtime is hosting. These metrics may be used for decision-making by the scheduler, rather than runtime utilization, for example.

We evaluate four scheduling policies for the high-level scheduler and five for the low-level scheduler. The objective for the schedulers is to minimize cost while meeting the SLO of each application. The policies are evaluated with respect to the number of runtimes, the impact on application performance, and the utilization of the runtimes. The results of our evaluation show that the high-level policy based on Runtime Happiness combined with the low-level policy based on Application Happiness outperforms the other policies, including the bin-packing and random strategies. In particular, our combined policy requires up to 30% fewer runtimes than the simple bin-packing strategy and increases runtime utilization by up to 40% for the Edge Data Center (DC) in the scenarios we evaluated.
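The Application Happiness and Runtime Happiness metrics are only described informally in the abstract above. The following sketch shows one plausible way such SLO-relative metrics could be computed; the exact formulas and the latency-based SLO are assumptions made for illustration, not taken from the report.

def application_happiness(measured_latency_ms, slo_latency_ms):
    # 1.0 means the application meets (or beats) its SLO; values below 1.0
    # indicate how far the measured latency is from the target.
    if measured_latency_ms <= 0:
        return 1.0
    return min(1.0, slo_latency_ms / measured_latency_ms)

def runtime_happiness(app_latency_slo_pairs):
    # Aggregate over the applications hosted by one runtime; the mean is
    # used here, but the report may aggregate differently.
    scores = [application_happiness(m, s) for m, s in app_latency_slo_pairs]
    return sum(scores) / len(scores) if scores else 1.0

# One runtime hosting two applications: one meets its SLO, one violates it.
print(runtime_happiness([(80.0, 100.0), (250.0, 200.0)]))  # -> 0.9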

Place, publisher, year, edition, pages
Umeå: Umeå universitet, 2018. p. 28
Series
Report / UMINF, ISSN 0348-0542 ; 18.11
Keywords
Edge/Fog Computing, Hierarchical Resource Allocation, IoTs, Mobile Edge Clouds
National Category
Computer Systems
Research subject
Computer Systems
Identifiers
urn:nbn:se:umu:diva-151455 (URN)
Available from: 2018-09-04 Created: 2018-09-04 Last updated: 2018-09-07. Bibliographically approved
Mehta, A., Bayuh Lakew, E., Tordsson, J. & Elmroth, E. (2018). Utility-based Allocation of Industrial IoT Applications in Mobile Edge Clouds. In: 2018 IEEE 37th International Performance Computing and Communications Conference (IPCCC): . Paper presented at 37th IEEE International Performance Computing and Communications Conference (IPCCC), Orlando, FL, November 17-19, 2018. IEEE
Utility-based Allocation of Industrial IoT Applications in Mobile Edge Clouds
2018 (English). In: 2018 IEEE 37th International Performance Computing and Communications Conference (IPCCC), IEEE, 2018. Conference paper, Published paper (Refereed)
Abstract [en]

Mobile Edge Clouds (MECs) create new opportunities and challenges in terms of scheduling and running applications that have a wide range of latency requirements, such as intelligent transportation systems, process automation, and smart grids. We propose a two-tier scheduler for allocating runtime resources to Industrial Internet of Things (IIoT) applications in MECs. The higher-level scheduler runs periodically, monitoring system state and application performance, and decides whether to admit new applications and migrate existing ones. In contrast, the lower-level scheduler decides which application will get the runtime resource next. We use performance-based metrics that tell the extent to which the runtimes are meeting the Service Level Objectives (SLOs) of the hosted applications. The Application Happiness metric is based on a single application's performance and SLOs. The Runtime Happiness metric is based on the Application Happiness of the applications the runtime is hosting. These metrics may be used for decision-making by the scheduler, rather than runtime utilization, for example. We evaluate four scheduling policies for the high-level scheduler and five for the low-level scheduler. The objective for the schedulers is to minimize cost while meeting the SLO of each application. The policies are evaluated with respect to the number of runtimes, the impact on application performance, and the utilization of the runtimes. The results of our evaluation show that the high-level policy based on Runtime Happiness combined with the low-level policy based on Application Happiness outperforms the other policies, including the bin-packing and random strategies. In particular, our combined policy requires up to 30% fewer runtimes than the simple bin-packing strategy and increases runtime utilization by up to 40% for the Edge Data Center (DC) in the scenarios we evaluated.
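To complement the metric sketch given under the report version of this work above, here is a hypothetical outline of the two-tier scheduling loop the abstract describes: a periodic high-level pass that admits applications onto runtimes and flags unhappy runtimes for migration, and a low-level pass that picks the next application to serve. The class names, admission threshold, and selection rules are all assumptions, not the paper's algorithm.

from dataclasses import dataclass, field

@dataclass
class App:
    name: str
    happiness: float  # Application Happiness in [0, 1]

@dataclass
class Runtime:
    name: str
    apps: list = field(default_factory=list)

    def happiness(self):
        # Runtime Happiness: aggregate of the hosted applications' happiness.
        return (sum(a.happiness for a in self.apps) / len(self.apps)
                if self.apps else 1.0)

def high_level_step(runtimes, pending, admit_threshold=0.8):
    # Periodic pass: place pending apps on the happiest runtime and report
    # runtimes whose happiness dropped below the threshold (candidates for
    # migrating applications away).
    for app in list(pending):
        target = max(runtimes, key=lambda r: r.happiness())
        if target.happiness() >= admit_threshold:
            target.apps.append(app)
            pending.remove(app)
    return [r for r in runtimes if r.happiness() < admit_threshold]

def low_level_step(runtime):
    # Give the next slice of runtime resources to the unhappiest application.
    return min(runtime.apps, key=lambda a: a.happiness, default=None)

rts = [Runtime("edge-dc-1", [App("plc-control", 0.95)]), Runtime("edge-dc-2")]
print(high_level_step(rts, [App("smart-grid", 0.5)]), low_level_step(rts[0]))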

Place, publisher, year, edition, pages
IEEE, 2018
Series
IEEE International Performance Computing and Communications Conference (IPCCC), ISSN 1097-2641
National Category
Computer Systems; Computer Engineering
Identifiers
urn:nbn:se:umu:diva-160322 (URN); 10.1109/PCCC.2018.8711075 (DOI); 000469326500052 (); 978-1-5386-6808-5 (ISBN); 978-1-5386-6807-8 (ISBN); 978-1-5386-6809-2 (ISBN)
Conference
37th IEEE International Performance Computing and Communications Conference (IPCCC), Orlando, FL, November 17-19, 2018
Funder
EU, Horizon 2020, ICT30
Available from: 2019-06-17 Created: 2019-06-17 Last updated: 2019-06-17. Bibliographically approved
Ibidunmoye, O., Lakew, E. B. & Elmroth, E. (2017). A Black-box Approach for Detecting Systems Anomalies in Virtualized Environments. In: 2017 IEEE International Conference on Cloud and Autonomic Computing (ICCAC 2017): . Paper presented at 2017 IEEE International Conference on Cloud and Autonomic Computing (ICCAC 2017), Tucson, Arizona, USA, 18–22 September 2017 (pp. 22-33). IEEE
A Black-box Approach for Detecting Systems Anomalies in Virtualized Environments
2017 (English). In: 2017 IEEE International Conference on Cloud and Autonomic Computing (ICCAC 2017), IEEE, 2017, p. 22-33. Conference paper, Published paper (Refereed)
Abstract [en]

Virtualization technologies allow cloud providers to optimize server utilization and cost by co-locating services on as few servers as possible. Studies have shown how applications in multi-tenant environments are susceptible to system anomalies, such as abnormal resource usage due to performance interference. Effective detection of such anomalies requires techniques that can adapt autonomously to dynamic service workloads, require limited instrumentation to cope with diverse application services, and infer relationships between anomalies non-intrusively to avoid "alarm fatigue" at scale. We propose a black-box framework that includes an unsupervised prediction-based mechanism for automated anomaly detection in the multi-dimensional resource behaviour of datacenter nodes and a graph-theoretic technique for ranking anomalous nodes across the datacenter. The proposed framework is evaluated using resource traces of over 100 virtual machines obtained from a production cluster, as well as traces obtained from an experimental testbed under realistic service composition. The technique achieves an average normalized root-mean-squared forecast error and R^2 of (0.92, 0.07) across host servers and (0.70, 0.39) across virtual machines. The average detection rate is 88%, while explaining 62% of SLA violations with an average lead time of 6 time points, when the testbed is actively perturbed under three contention scenarios.
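As an informal illustration of the prediction-based detection idea (not the authors' framework), the sketch below forecasts a single resource metric with exponential smoothing and flags samples whose residual exceeds a few standard deviations of the recent residuals; the forecaster, window, and threshold are assumptions.

import statistics

def detect_anomalies(series, alpha=0.3, window=20, k=3.0):
    # Unsupervised, prediction-based detection on one resource metric:
    # flag points whose forecast residual exceeds k standard deviations
    # of the recent residuals.
    forecast = series[0]
    residuals, anomalies = [], []
    for t, value in enumerate(series[1:], start=1):
        residual = value - forecast
        recent = residuals[-window:]
        if len(recent) >= 5:
            sigma = statistics.pstdev(recent) or 1e-9
            if abs(residual) > k * sigma:
                anomalies.append(t)
        residuals.append(residual)
        forecast = alpha * value + (1 - alpha) * forecast
    return anomalies

# A CPU-usage trace with an abnormal spike at the last sample.
trace = [20, 21, 19, 22, 20, 21, 20, 19, 21, 22, 20, 21, 95]
print(detect_anomalies(trace))  # expected to flag the final spike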

Place, publisher, year, edition, pages
IEEE, 2017
Keywords
Anomaly Detection, Performance Anomaly Detection, Performance Diagnosis, Cloud Computing, Virtualized Services, Unsupervised Learning, Time Series Analysis, Quality of Service
National Category
Computer Systems
Research subject
Computer Systems; business data processing
Identifiers
urn:nbn:se:umu:diva-142031 (URN); 10.1109/ICCAC.2017.10 (DOI); 978-1-5386-1939-1 (ISBN)
Conference
2017 IEEE International Conference on Cloud and Autonomic Computing (ICCAC 2017), Tucson, Arizona, USA, 18–22 September 2017
Projects
Cloud Control
Funder
Swedish Research Council, C0590801
Available from: 2017-11-17 Created: 2017-11-17 Last updated: 2018-06-09. Bibliographically approved
Goumas, G., Nikas, K., Lakew, E. B., Kotselidis, C., Attwood, A., Elmroth, E., . . . Koziris, N. (2017). ACTiCLOUD: Enabling the Next Generation of Cloud Applications. In: Lee, K., Liu, L. (Eds.), 2017 IEEE 37TH INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS (ICDCS 2017): . Paper presented at 37th IEEE International Conference on Distributed Computing Systems (ICDCS), JUN 05-08, 2017, Atlanta, GA (pp. 1836-1845). IEEE Computer Society
ACTiCLOUD: Enabling the Next Generation of Cloud Applications
2017 (English). In: 2017 IEEE 37TH INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS (ICDCS 2017) / [ed] Lee, K., Liu, L., IEEE Computer Society, 2017, p. 1836-1845. Conference paper, Published paper (Refereed)
Abstract [en]

Despite their proliferation as a dominant computing paradigm, cloud computing systems lack effective mechanisms to manage their vast amounts of resources efficiently. Resources are stranded and fragmented, ultimately limiting the applicability of cloud systems to large classes of critical applications that pose more than moderate resource demands. Eliminating the current technological barriers to genuine fluidity and scalability of cloud resources is essential to strengthen cloud computing's role as a critical cornerstone of the digital economy. ACTiCLOUD proposes a novel cloud architecture that breaks the existing scale-up and share-nothing barriers and enables the holistic management of physical resources both at the local cloud site and at distributed levels. Specifically, it advances the cloud resource management stack by extending state-of-the-art hypervisor technology beyond the physical server boundary and beyond localized cloud management, providing holistic resource management within a rack, within a site, and across distributed cloud sites. On top of this, ACTiCLOUD will adapt and optimize system libraries and runtimes (e.g., the JVM) as well as ACTiCLOUD-native applications: extremely demanding and critical classes of applications that currently face severe difficulties in matching their resource requirements to state-of-the-art cloud offerings.

Place, publisher, year, edition, pages
IEEE Computer Society, 2017
Series
IEEE International Conference on Distributed Computing Systems, ISSN 1063-6927
Keywords
cloud computing, resource management, in-memory databases, resource disaggregation, scale-up, rackscale hypervisor
National Category
Computer Systems
Identifiers
urn:nbn:se:umu:diva-142014 (URN); 10.1109/ICDCS.2017.252 (DOI); 000412759500173 (); 978-1-5386-1791-5 (ISBN); 978-1-5386-1792-2 (ISBN); 978-1-5386-1793-9 (ISBN)
Conference
37th IEEE International Conference on Distributed Computing Systems (ICDCS), JUN 05-08, 2017, Atlanta, GA
Available from: 2017-11-20 Created: 2017-11-20 Last updated: 2018-06-09. Bibliographically approved
Ibidunmoye, O., Moghadam, M. H., Lakew, E. B. & Elmroth, E. (2017). Adaptive Service Performance Control using Cooperative Fuzzy Reinforcement Learning in Virtualized Environments. In: UCC '17 Proceedings of the 10th International Conference on Utility and Cloud Computing: . Paper presented at 10th IEEE/ACM International Conference on Utility and Cloud Computing, Austin, Texas, USA, December 5-8, 2017 (pp. 19-28). IEEE/ACM
Adaptive Service Performance Control using Cooperative Fuzzy Reinforcement Learning in Virtualized Environments
2017 (English). In: UCC '17 Proceedings of the 10th International Conference on Utility and Cloud Computing, IEEE/ACM, 2017, p. 19-28. Conference paper, Published paper (Refereed)
Abstract [en]

Designing efficient control mechanisms to meet strict performance requirements under changing workload demands, without sacrificing resource efficiency, remains a challenge in cloud infrastructures. A popular approach is fine-grained resource provisioning via auto-scaling mechanisms that rely on either threshold-based adaptation rules or sophisticated queuing/control-theoretic models. While it is difficult at design time to specify optimal threshold rules, it is even more challenging to infer precise performance models for the multitude of services. Recently, reinforcement learning has been applied to address this challenge. However, such approaches require many learning trials to stabilize, both at the beginning and when operational conditions vary, thereby limiting their application under dynamic workloads. To this end, we extend the standard reinforcement learning approach in two ways: a) we formulate the system state as a fuzzy space, and b) we exploit a set of cooperative agents to explore multiple fuzzy states in parallel to speed up learning. Through multiple experiments on a real virtualized testbed, we demonstrate that our approach converges quickly and meets performance targets with high efficiency, without explicit service models.
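A minimal sketch of the two extensions named above, fuzzifying the state and applying the update to a Q-table that cooperating agents could share, is given below. It is an assumption-level illustration rather than the paper's controller; the membership functions, actions, and reward are invented for the example.

ACTIONS = ["scale_down", "keep", "scale_up"]
Q = {(s, a): 0.0 for s in ("low", "medium", "high") for a in ACTIONS}

def memberships(cpu_util):
    # Fuzzy state: degrees of membership in three utilization sets.
    return {
        "low": max(0.0, 1.0 - cpu_util / 0.5),
        "medium": max(0.0, 1.0 - abs(cpu_util - 0.5) / 0.5),
        "high": max(0.0, (cpu_util - 0.5) / 0.5),
    }

def fuzzy_value(util):
    # Fuzzy-weighted value of the greedy action in the given state.
    return sum(m * max(Q[(s, a)] for a in ACTIONS)
               for s, m in memberships(util).items())

def fuzzy_q_update(util, action, reward, next_util, alpha=0.1, gamma=0.9):
    # Each fuzzy state is updated in proportion to its membership degree;
    # cooperating agents would apply this update to the same shared table Q.
    target = reward + gamma * fuzzy_value(next_util)
    for s, m in memberships(util).items():
        Q[(s, action)] += alpha * m * (target - Q[(s, action)])

# Example: scaling up relieved an overloaded VM, so the agent is rewarded.
fuzzy_q_update(util=0.9, action="scale_up", reward=1.0, next_util=0.6)
print(Q[("high", "scale_up")], Q[("medium", "scale_up")])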

Place, publisher, year, edition, pages
IEEE/ACM, 2017
Keywords
Performance control, Resource allocation, Quality of service, Reinforcement learning, Autoscaling, Autonomic computing
National Category
Computer Systems
Research subject
Computer Systems; business data processing
Identifiers
urn:nbn:se:umu:diva-142032 (URN); 10.1145/3147213.3147225 (DOI); 978-1-4503-5149-2 (ISBN)
Conference
10th IEEE/ACM International Conference on Utility and Cloud Computing, Austin, Texas, USA, December 5-8, 2017
Projects
Cloud Control
Funder
Swedish Research Council, C0590801
Available from: 2017-11-17 Created: 2017-11-17 Last updated: 2019-06-19. Bibliographically approved
Lakew, E. B., Papadopoulos, A. V., Maggio, M., Klein, C. & Elmroth, E. (2017). KPI-agnostic Control for Fine-Grained Vertical Elasticity. In: 2017 17TH IEEE/ACM INTERNATIONAL SYMPOSIUM ON CLUSTER, CLOUD AND GRID COMPUTING (CCGRID): . Paper presented at 17th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID), MAY 14-17, 2017, Madrid, SPAIN (pp. 589-598). IEEE
KPI-agnostic Control for Fine-Grained Vertical Elasticity
2017 (English). In: 2017 17TH IEEE/ACM INTERNATIONAL SYMPOSIUM ON CLUSTER, CLOUD AND GRID COMPUTING (CCGRID), IEEE, 2017, p. 589-598. Conference paper, Published paper (Refereed)
Abstract [en]

Applications hosted in the cloud have become indispensable in several contexts, with their performance often being key to business operation and their running costs needing to be minimized. To minimize running costs, most modern virtualization technologies such as Linux Containers, Xen, and KVM offer powerful resource control primitives for individual provisioning that enable adding or removing fractions of cores and/or megabytes of memory for periods as short as a few seconds. Despite the technology being ready, there is a lack of proper techniques for fine-grained resource allocation, because there is an inherent challenge in determining the correct composition of resources an application needs, under varying workload, to ensure deterministic performance.

This paper presents a control-based approach to the management of multiple resources, accounting for resource consumption together with application performance, enabling fine-grained vertical elasticity. The control strategy ensures that the application meets its target performance indicators while consuming as few resources as possible. We carried out an extensive set of experiments using different applications – interactive ones with response-time requirements, as well as non-interactive ones with throughput requirements – varying the workload mix of each application over time. The results demonstrate that our solution precisely provides guaranteed performance while at the same time avoiding both resource over- and under-provisioning.
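The control strategy is described only at a high level in the abstract above. As an illustration of fine-grained vertical scaling driven by a performance error, here is a hypothetical proportional step that nudges CPU and memory allocations toward a KPI target; the gain, bounds, and the assumption that a larger measured KPI means worse performance (e.g., response time) are not taken from the paper.

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def vertical_step(kpi_measured, kpi_target, cpu_cores, mem_mb, gain=1.0,
                  cpu_bounds=(0.1, 16.0), mem_bounds=(128, 65536)):
    # One control period: the relative KPI error (positive = worse than
    # target) drives a proportional change in both resources, clamped to
    # the allowed allocation ranges.
    error = (kpi_measured - kpi_target) / kpi_target
    cpu_cores = clamp(cpu_cores * (1.0 + gain * error), *cpu_bounds)
    mem_mb = clamp(mem_mb * (1.0 + gain * error), *mem_bounds)
    return cpu_cores, mem_mb

# Response time 30% above target -> allocate roughly 30% more of each resource.
print(vertical_step(kpi_measured=260.0, kpi_target=200.0,
                    cpu_cores=2.0, mem_mb=2048))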

Place, publisher, year, edition, pages
IEEE, 2017
Series
IEEE-ACM International Symposium on Cluster Cloud and Grid Computing, ISSN 2376-4414
National Category
Computer Systems
Identifiers
urn:nbn:se:umu:diva-146250 (URN); 10.1109/CCGRID.2017.71 (DOI); 000426912900063 (); 978-1-5090-6611-7 (ISBN)
Conference
17th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID), MAY 14-17, 2017, Madrid, SPAIN
Available from: 2018-05-16 Created: 2018-05-16 Last updated: 2018-06-09. Bibliographically approved
A hybrid cloud controller for vertical memory elasticity: a control-theoretic approach
2016 (English). In: Future Generation Computer Systems, ISSN 0167-739X, E-ISSN 1872-7115, Vol. 65, p. 57-72. Article in journal (Refereed), Published
Abstract [en]

Web-facing applications are expected to provide certain performance guarantees despite dynamic and continuous workload changes. As a result, application owners are using cloud computing because it offers the ability to dynamically provision computing resources (e.g., memory, CPU) in response to changes in workload demands, meeting performance targets and eliminating upfront costs. Horizontal scaling, vertical scaling, and their combination are the possible dimensions along which a cloud application can be scaled in terms of allocated resources. In vertical elasticity, the focus of this work, the size of virtual machines (VMs) is adjusted in terms of allocated computing resources according to the runtime workload. A commonly used vertical elasticity approach decides based on resource utilization and is called capacity-based, while a newer trend is to use application performance as the decision-making criterion, an approach called performance-based. This paper discusses these two approaches and proposes a novel hybrid elasticity approach that takes both application performance and resource utilization into account to leverage the benefits of both. The proposed approach is used to realize vertical elasticity of memory (vertical memory elasticity), where the allocated memory of the VM is auto-scaled at runtime. To this aim, we use control theory to synthesize a feedback controller that meets the application performance constraints by auto-scaling the allocated memory, i.e., applying vertical memory elasticity. Different from existing vertical elasticity approaches, the novelty of our work lies in utilizing both memory utilization and application response time as decision-making criteria. To verify the resource efficiency and the controller's ability to handle unexpected workloads, we implemented the controller on top of the Xen hypervisor and performed a series of experiments using the RUBBoS interactive benchmark application under synthetic and real workloads, including Wikipedia and FIFA traces. The results reveal that the hybrid controller meets the application performance target with better performance stability (i.e., lower standard deviation of response time), while achieving high memory utilization (close to 83%) and allocating less memory than all other baseline controllers.
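As a rough sketch of the hybrid idea, fusing a capacity-based signal (memory utilization) with a performance-based signal (response time), the toy controller below takes the larger of the two suggested allocations each period. The fusion rule, gains, and constants are assumptions for illustration; the paper synthesizes its controller with control theory rather than these heuristics.

def capacity_based(mem_mb, utilization, target_util=0.83):
    # Capacity-based suggestion: keep memory utilization near the target.
    return mem_mb * (utilization / target_util)

def performance_based(mem_mb, response_ms, target_ms, gain=0.8):
    # Performance-based suggestion: grow or shrink with the relative
    # response-time error.
    error = (response_ms - target_ms) / target_ms
    return mem_mb * (1.0 + gain * error)

def hybrid_step(mem_mb, utilization, response_ms, target_ms,
                min_mb=256, max_mb=16384):
    # Hybrid rule: never allocate less than either signal asks for, which
    # guards against both SLO violations and memory pressure.
    suggestion = max(capacity_based(mem_mb, utilization),
                     performance_based(mem_mb, response_ms, target_ms))
    return max(min_mb, min(max_mb, suggestion))

# Utilization is modest but response time exceeds its target, so the
# performance signal dominates and the VM's memory is scaled up.
print(hybrid_step(mem_mb=2048, utilization=0.6, response_ms=900, target_ms=600))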

Place, publisher, year, edition, pages
Elsevier, 2016
Keywords
Cloud computing, Control theory, Vertical memory elasticity, Interactive application performance, Memory utilization
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-126723 (URN); 10.1016/j.future.2016.05.028 (DOI); 000383826700005 ()
Available from: 2016-10-21 Created: 2016-10-13 Last updated: 2018-06-09. Bibliographically approved
Tomas, L., Bayuh Lakew, E. & Elmroth, E. (2016). Service Level and Performance Aware Dynamic Resource Allocation in Overbooked Data Centers. In: 2016 16TH IEEE/ACM INTERNATIONAL SYMPOSIUM ON CLUSTER, CLOUD AND GRID COMPUTING (CCGRID): . Paper presented at 16th International Conference on Cluster, Cloud and Grid Computing (CCGrid) (pp. 42-51).
Service Level and Performance Aware Dynamic Resource Allocation in Overbooked Data Centers
2016 (English). In: 2016 16TH IEEE/ACM INTERNATIONAL SYMPOSIUM ON CLUSTER, CLOUD AND GRID COMPUTING (CCGRID), 2016, p. 42-51. Conference paper, Published paper (Refereed)
Abstract [en]

Many cloud computing providers use overbooking to increase their low utilization ratios. This, however, increases the risk of performance degradation due to interference among co-located VMs. To address this problem, we present a service-level and performance-aware controller that (1) provides performance isolation for high-QoS VMs and (2) reduces interference between low-QoS VMs by dynamically mapping virtual cores to physical cores, thus limiting the amount of resources each VM can access depending on its performance. Our evaluation, based on real cloud applications and stress, synthetic, and realistic workloads, demonstrates that a more efficient use of the resources is achieved, dynamically allocating the available capacity to the applications that need it most, which in turn leads to more stable and predictable performance over time.
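The dynamic virtual-to-physical core mapping described above can be pictured with a hypothetical placement routine: high-QoS VMs get dedicated physical cores, while low-QoS VMs share whatever cores remain, confining interference to the overbooked set. The QoS labels, ordering, and fallback are assumptions, not the paper's controller.

def map_cores(vms, physical_cores):
    # vms: list of (name, qos, requested_vcores).  Returns {name: pcores}.
    mapping, free = {}, list(range(physical_cores))
    # Place high-QoS VMs first so they get dedicated (isolated) cores.
    for name, qos, vcores in sorted(vms, key=lambda v: v[1] != "high"):
        if qos == "high" and len(free) >= vcores:
            mapping[name], free = free[:vcores], free[vcores:]
        else:
            # Low-QoS (or spill-over) VMs share the leftover cores.
            mapping[name] = list(free) if free else list(range(physical_cores))
    return mapping

# One high-QoS database VM gets pinned cores; two low-QoS batch VMs share
# the remaining cores, so their interference does not touch the database.
vms = [("db", "high", 4), ("batch-1", "low", 2), ("batch-2", "low", 2)]
print(map_cores(vms, physical_cores=8))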

Series
IEEE-ACM International Symposium on Cluster Cloud and Grid Computing, ISSN 2376-4414
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-124169 (URN); 10.1109/CCGrid.2016.29 (DOI); 000382529800005 (); 978-1-5090-2453-7 (ISBN)
Conference
16th International Conference on Cluster, Cloud and Grid Computing (CCGrid)
Available from: 2016-07-25 Created: 2016-07-25 Last updated: 2018-06-07. Bibliographically approved