Elmroth, Erik
Publications (10 of 169)
Ibidunmoye, O., Ali-Reza, R. & Elmroth, E. (2018). Adaptive Anomaly Detection in Performance Metric Streams. IEEE Transactions on Network and Service Management, 15(1), 217-231
Adaptive Anomaly Detection in Performance Metric Streams
2018 (English) In: IEEE Transactions on Network and Service Management, ISSN 1932-4537, Vol. 15, no. 1, p. 217-231. Article in journal (Refereed), Published
Abstract [en]

Continuous detection of performance anomalies such as service degradations has become critical in cloud and Internet services due to their impact on quality of service and end-user experience. However, the volume and fast-changing behaviour of metric streams make this a challenging task. Many diagnosis frameworks rely on thresholding under stationarity or normality assumptions, or on complex models that require extensive offline training. Such techniques are known to be prone to spurious false alarms in online settings, as metric streams undergo rapid contextual changes from known baselines. We therefore propose two unsupervised incremental techniques that follow a two-step strategy: first, estimate an underlying temporal property of the stream via adaptive learning; then, apply statistically robust control charts to recognize deviations. We evaluated our techniques by replaying over 40 time-series streams from the Yahoo! Webscope S5 datasets, as well as 4 other traces of real web service QoS and ISP traffic measurements. Our methods achieve high detection accuracy with few false alarms, and generally outperform an open-source package for time-series anomaly detection.
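
The two-step strategy can be pictured with a minimal sketch: an exponentially weighted moving average (EWMA) adaptively tracks the stream's baseline, and a control chart on the residual flags deviations. The EWMA estimator, the 3-sigma band, and all parameter values below are illustrative assumptions, not the paper's exact estimators.

import math

class StreamingDetector:
    """Adaptive baseline (EWMA) plus control chart; an illustrative sketch."""
    def __init__(self, alpha=0.1, k=3.0):
        self.alpha = alpha   # smoothing factor of the adaptive baseline (assumed)
        self.k = k           # control-chart width in standard deviations (assumed)
        self.mean = None
        self.var = 0.0

    def update(self, x):
        if self.mean is None:        # the first observation seeds the baseline
            self.mean = x
            return False
        resid = x - self.mean
        flagged = self.var > 0 and abs(resid) > self.k * math.sqrt(self.var)
        # adapt the baseline and variance incrementally (unsupervised, online)
        self.mean += self.alpha * resid
        self.var = (1 - self.alpha) * (self.var + self.alpha * resid * resid)
        return flagged

detector = StreamingDetector()
for value in [10, 11, 10, 12, 11, 40, 11]:
    if detector.update(value):
        print("anomaly:", value)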

Place, publisher, year, edition, pages
IEEE, 2018
Keywords
Performance Monitoring and Measurement, Computer Network Management, Quality of Service, Time Series Analysis, Anomaly Detection, Unsupervised Learning
National Category
Computer Systems
Research subject
Computer Science; Computing Science; Computer Systems
Identifiers
urn:nbn:se:umu:diva-142030 (URN); 10.1109/TNSM.2017.2750906 (DOI); 000427420100016 (ISI)
Projects
Cloud Control
Funder
Swedish Research Council, C0590801
Available from: 2017-11-17 Created: 2017-11-17 Last updated: 2018-08-07. Bibliographically approved
Mehta, A. & Elmroth, E. (2018). Distributed Cost-Optimized Placement for Latency-Critical Applications in Heterogeneous Environments. In: Proceedings of the IEEE 15th International Conference on Autonomic Computing (ICAC). Paper presented at the 2018 IEEE International Conference on Autonomic Computing, Trento, Italy, September 3-7, 2018 (pp. 121-130).
Distributed Cost-Optimized Placement for Latency-Critical Applications in Heterogeneous Environments
2018 (English) In: Proceedings of the IEEE 15th International Conference on Autonomic Computing (ICAC), 2018, p. 121-130. Conference paper, Published paper (Refereed)
Abstract [en]

Mobile Edge Clouds (MECs) with 5G will create new opportunities to develop latency-critical applications in domains such as intelligent transportation systems, process automation, and smart grids. However, it is not clear how to cost-efficiently deploy and manage a large number of such applications given the heterogeneity of devices, application performance requirements, and workloads. This work explores cost and performance dynamics for IoT applications, and proposes distributed algorithms for the automatic deployment of IoT applications in heterogeneous environments. The placement algorithms were evaluated with respect to metrics including the number of required runtimes, applications' slowdown, and the number of iterations used to place an application. Iterative search-based distributed algorithms such as Size Interval Actor Assignment in Groups (SIAA G) outperformed the random and bin packing algorithms, and are therefore recommended for this purpose. The Size Interval Actor Assignment in Groups at Least Utilized Runtime (SIAA G LUR) algorithm is also recommended when minimizing the number of iterations is important. The tradeoff of using the SIAA G algorithms is a few extra runtimes compared to bin packing algorithms.
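
To make the size-interval idea concrete, here is a toy sketch: actors are bucketed by resource demand so that differently sized actors do not share runtimes, and each bucket is packed first-fit onto its own runtimes. The interval boundaries, the runtime capacity, and the first-fit packing within a group are assumptions for illustration; the paper's SIAA G variants differ in the details.

import bisect

BOUNDARIES = [10, 50]   # assumed size intervals: [0, 10), [10, 50), [50, inf)

def assign(actors, capacity=100):
    """Group actors by size interval, then pack each group first-fit."""
    groups = {i: [] for i in range(len(BOUNDARIES) + 1)}
    for name, size in actors:
        groups[bisect.bisect(BOUNDARIES, size)].append((name, size))
    placement = {}
    for gid, members in groups.items():
        runtimes, free = [], []
        for name, size in members:            # first-fit within the group
            for i, headroom in enumerate(free):
                if headroom >= size:
                    runtimes[i].append(name)
                    free[i] -= size
                    break
            else:                             # no runtime fits: open a new one
                runtimes.append([name])
                free.append(capacity - size)
        placement[gid] = runtimes
    return placement

print(assign([("a", 5), ("b", 60), ("c", 8), ("d", 45)]))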

Series
Proceedings of the International Conference on Autonomic Computing, ISSN 2474-0764
Keywords
Mobile Edge Clouds, Fog Computing, IoTs, Distributed algorithms
National Category
Computer Systems
Identifiers
urn:nbn:se:umu:diva-151457 (URN); 10.1109/ICAC.2018.00022 (DOI)
Conference
2018 IEEE International Conference on Autonomic Computing, Trento, Italy, September 3-7, 2018
Available from: 2018-09-04 Created: 2018-09-04 Last updated: 2018-09-05
Krzywda, J., Ali-Eldin, A., Carlson, T. E., Östberg, P.-O. & Elmroth, E. (2018). Power-performance tradeoffs in data center servers: DVFS, CPU pinning, horizontal, and vertical scaling. Future Generation Computer Systems, 81, 114-128
Power-performance tradeoffs in data center servers: DVFS, CPU pinning, horizontal, and vertical scaling
2018 (English) In: Future Generation Computer Systems, ISSN 0167-739X, E-ISSN 1872-7115, Vol. 81, p. 114-128. Article in journal (Refereed), Published
Abstract [en]

Dynamic Voltage and Frequency Scaling (DVFS), CPU pinning, horizontal scaling, and vertical scaling are four techniques that have been proposed as actuators to control the performance and energy consumption of data center servers. This work investigates the utility of these four actuators, and quantifies the power-performance tradeoffs associated with them. Using replicas of the German Wikipedia running on our local testbed, we perform a set of experiments to quantify the influence of DVFS, vertical and horizontal scaling, and CPU pinning on end-to-end response time (average and tail), throughput, and power consumption under different workloads. The results show that DVFS rarely reduces the power consumption of underloaded servers by more than 5%, but it can be used to limit the maximal power consumption of a saturated server by up to 20% (at a cost of performance degradation). CPU pinning reduces the power consumption of underloaded servers (by up to 7%) at the cost of performance degradation, which can be limited by choosing an appropriate CPU pinning scheme. Horizontal and vertical scaling improve both the average and tail response time, but the improvement is not proportional to the amount of resources added. The load balancing strategy has a large impact on the tail response time of horizontally scaled applications.
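
Of the four actuators, CPU pinning is the easiest to reproduce from user space; below is a minimal Linux-only sketch using Python's standard library, illustrating the actuator itself rather than the paper's experimental tooling.

import os

# Pin the current process (pid 0 = self) to cores 0 and 1; Linux-only.
os.sched_setaffinity(0, {0, 1})
print("allowed cores:", os.sched_getaffinity(0))

DVFS is typically actuated in a similar spirit by writing a governor or frequency cap to the cpufreq interface under /sys/devices/system/cpu/, which requires root privileges.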

Keywords
Power-performance tradeoffs, Dynamic Voltage and Frequency Scaling (DVFS), CPU pinning, Horizontal scaling, Vertical scaling
National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-132427 (URN); 10.1016/j.future.2017.10.044 (DOI); 000423652200010 (ISI); 2-s2.0-85033772481 (Scopus ID)
Note

Originally published in a thesis in manuscript form.

Available from: 2017-03-13 Created: 2017-03-13 Last updated: 2018-06-09. Bibliographically approved
Rodrigo, G. P., Elmroth, E., Östberg, P.-O. & Ramakrishnan, L. (2018). ScSF: a scheduling simulation framework. In: Proceedings of the 21st Workshop on Job Scheduling Strategies for Parallel Processing. Paper presented at the 21st Workshop on Job Scheduling Strategies for Parallel Processing (JSSPP 2017), Orlando, FL, USA, June 2, 2017 (pp. 152-173). Springer, 10773
ScSF: a scheduling simulation framework
2018 (English) In: Proceedings of the 21st Workshop on Job Scheduling Strategies for Parallel Processing, Springer, 2018, Vol. 10773, p. 152-173. Conference paper, Published paper (Refereed)
Abstract [en]

High-throughput and data-intensive applications, often composed as workflows, are increasingly present in the workloads of current HPC systems. At the same time, trends for future HPC systems point towards more heterogeneous systems with deeper I/O and memory hierarchies. However, current HPC schedulers are designed to support classical large, tightly coupled parallel jobs over homogeneous systems. There is therefore an urgent need to investigate new scheduling algorithms that can manage the future workloads on HPC systems, yet there is a lack of appropriate models and frameworks to enable the development, testing, and validation of new scheduling ideas.

In this paper, we present an open-source scheduling simulation framework (ScSF) that covers all the steps of scheduling research through simulation. ScSF provides capabilities for workload modeling, workload generation, system simulation, comparative workload analysis, and experiment orchestration. The simulator is designed to run over a distributed computing infrastructure, enabling testing at scale. We describe in detail a use case in which ScSF is used to develop new techniques for managing scientific workflows in a batch scheduler, with the technique implemented in the framework scheduler. For evaluation purposes, 1728 experiments, equivalent to 33 years of simulated time, were run in a deployment of ScSF over a distributed infrastructure of 17 compute nodes during two months. The experimental results were analyzed in the framework, showing that the technique minimizes workflows' turnaround time without over-allocating resources. Finally, we discuss lessons learned from our experiences that will help future researchers.
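
For intuition about what such a framework replays, the core of any scheduler simulator is a discrete-event loop over job submissions and completions; a minimal first-come-first-served (FCFS) sketch follows. ScSF itself wraps a full Slurm-based simulator, so the FCFS policy, the job tuples, and the node-count capacity model below are all illustrative assumptions.

import heapq

def simulate(jobs, nodes):
    """jobs: iterable of (submit_time, runtime, width); returns the makespan."""
    events, queue, free, t = [], [], nodes, 0
    for submit, runtime, width in jobs:
        heapq.heappush(events, (submit, "submit", runtime, width))
    while events:
        # tie-break: "end" < "submit", so completions release nodes first
        t, kind, runtime, width = heapq.heappop(events)
        if kind == "submit":
            queue.append((runtime, width))
        else:                                 # a job ended; release its nodes
            free += width
        while queue and queue[0][1] <= free:  # FCFS: start the queue head
            run, wid = queue.pop(0)
            free -= wid
            heapq.heappush(events, (t + run, "end", run, wid))
    return t

print(simulate([(0, 5, 2), (1, 3, 2), (2, 4, 1)], nodes=3))   # prints 9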

Place, publisher, year, edition, pages
Springer, 2018
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349
Keywords
slurm, simulation, scheduling, HPC, High Performance Computing, workload, generation, analysis
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-132981 (URN); 10.1007/978-3-319-77398-8_9 (DOI); 000444863700009 (ISI); 978-3-319-77397-1 (ISBN); 978-3-319-77398-8 (ISBN)
Conference
21st Workshop on Job Scheduling Strategies for Parallel Processing (JSSPP 2017), Orlando, FL, USA, June 2, 2017
Funder
eSSENCE - An eScience Collaboration; Swedish Research Council, C0590801
Note

Work also supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research (ASCR). We used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy, both under Contract No. DE-AC02-05CH11231.

Available from: 2017-03-27 Created: 2017-03-27 Last updated: 2018-10-05. Bibliographically approved
Rodrigo, G. P., Östberg, P.-O., Elmroth, E., Antypas, K., Gerber, R. & Ramakrishnan, L. (2018). Towards understanding HPC users and systems: a NERSC case study. Journal of Parallel and Distributed Computing, 111, 206-221
Towards understanding HPC users and systems: a NERSC case study
2018 (English) In: Journal of Parallel and Distributed Computing, ISSN 0743-7315, E-ISSN 1096-0848, Vol. 111, p. 206-221. Article in journal (Refereed), Published
Abstract [en]

The high performance computing (HPC) scheduling landscape currently faces new challenges due to changes in the workload. Previously, HPC centers were dominated by tightly coupled MPI jobs; HPC workloads now increasingly include high-throughput, data-intensive, and stream-processing applications. As a consequence, workloads are becoming more diverse at both the application and job levels, posing new challenges to classical HPC schedulers. There is a need to understand current HPC workloads and their evolution to facilitate informed future scheduling research and enable efficient scheduling in future HPC systems.

In this paper, we present a methodology to characterize workloads and assess their heterogeneity, both at a particular time period and in their evolution over time. We apply this methodology to the workloads of three systems (Hopper, Edison, and Carver) at the National Energy Research Scientific Computing Center (NERSC). We present the resulting characterization of jobs, queues, heterogeneity, and performance, which includes detailed information on a year of workload (2014) and on the evolution through the systems' lifetimes (2010–2014).
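
The heterogeneity assessment leans on clustering (k-means appears among the keywords below); a toy version of that step, run over synthetic (runtime hours, cores) job features, might look like the following sketch. The features, the choice of k, and the plain Lloyd's iterations are assumptions for illustration.

import random

def kmeans(points, k, iters=20):
    """Plain Lloyd's algorithm over tuples of equal dimension."""
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                       # assign to the nearest center
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        centers = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

jobs = [(0.1, 1), (0.2, 2), (0.3, 1), (12.0, 512), (10.0, 256)]
print(kmeans(jobs, k=2))   # one small-job center, one large-job center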

Place, publisher, year, edition, pages
Elsevier, 2018
Keywords
Workload analysis, Supercomputer, HPC, Scheduling, NERSC, Heterogeneity, k-means
National Category
Computer Sciences
Research subject
Computing Science
Identifiers
urn:nbn:se:umu:diva-132980 (URN); 10.1016/j.jpdc.2017.09.002 (DOI); 000415028900017 (ISI)
Funder
eSSENCE - An eScience Collaboration; EU, Horizon 2020, 610711; EU, Horizon 2020, 732667; Swedish Research Council, C0590801
Note

Work also supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research (ASCR). We used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy, both under Contract No. DE-AC02-05CH11231.

Originally included in a thesis in manuscript form in 2017.

Available from: 2017-03-27 Created: 2017-03-27 Last updated: 2018-06-25. Bibliographically approved
Mehta, A., Bayuh Lakew, E., Tordsson, J. & Elmroth, E. (2018). Utility-based Allocation of Industrial IoT Applications in Mobile Edge Clouds. Umeå: Umeå universitet
Utility-based Allocation of Industrial IoT Applications in Mobile Edge Clouds
2018 (English) Report (Other academic)
Abstract [en]

Mobile Edge Clouds (MECs) create new opportunities and challenges in terms of scheduling and running applications that have a wide range of latency requirements, such as intelligent transportation systems, process automation, and smart grids. We propose a two-tier scheduler for allocating runtime resources to Industrial Internet of Things (IIoT) applications in MECs. The higher-level scheduler runs periodically, monitoring system state and application performance, and decides whether to admit new applications and migrate existing ones. The lower-level scheduler decides which application gets the runtime resource next. We use performance-based metrics that tell the extent to which the runtimes are meeting the Service Level Objectives (SLOs) of the hosted applications: the Application Happiness metric is based on a single application's performance and SLOs, while the Runtime Happiness metric is based on the Application Happiness of the applications the runtime hosts. The scheduler may use these metrics for decision-making rather than, for example, runtime utilization.

We evaluate four scheduling policies for the high-level scheduler and five for the low-level scheduler. The objective of the schedulers is to minimize cost while meeting the SLO of each application. The policies are evaluated with respect to the number of runtimes, the impact on application performance, and the utilization of the runtimes. The results of our evaluation show that the high-level policy based on Runtime Happiness, combined with the low-level policy based on Application Happiness, outperforms the other policies, including the bin packing and random strategies. In particular, our combined policy requires up to 30% fewer runtimes than the simple bin packing strategy and increases runtime utilization by up to 40% for the Edge Data Center (DC) in the scenarios we evaluated.
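
One plausible reading of the two happiness metrics, with formulas that are our assumptions rather than the report's exact definitions, is sketched below: Application Happiness compares measured performance against the SLO, and Runtime Happiness aggregates over the hosted applications.

def application_happiness(measured_latency, slo_latency):
    # >= 1.0 means the SLO is met with headroom; < 1.0 means a violation
    # (assumed definition, for illustration only)
    return slo_latency / measured_latency

def runtime_happiness(app_happiness_values):
    # a runtime is only as happy as its unhappiest application (assumed)
    return min(app_happiness_values)

apps = {"sensor-fusion": application_happiness(8.0, 10.0),
        "alarm-loop": application_happiness(12.0, 10.0)}
print(apps)
print("runtime happiness:", runtime_happiness(apps.values()))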

Place, publisher, year, edition, pages
Umeå: Umeå universitet, 2018. p. 28
Series
Report / UMINF, ISSN 0348-0542 ; 18.11
Keywords
Edge/Fog Computing, Hierarchical Resource Allocation, IoTs, Mobile Edge Clouds
National Category
Computer Systems
Research subject
Computer Systems
Identifiers
urn:nbn:se:umu:diva-151455 (URN)
Available from: 2018-09-04 Created: 2018-09-04 Last updated: 2018-09-07. Bibliographically approved
Ibidunmoye, O., Lakew, E. B. & Elmroth, E. (2017). A Black-box Approach for Detecting Systems Anomalies in Virtualized Environments. In: 2017 IEEE International Conference on Cloud and Autonomic Computing (ICCAC 2017). Paper presented at the 2017 IEEE International Conference on Cloud and Autonomic Computing (ICCAC 2017), Tucson, Arizona, USA, 18–22 September 2017 (pp. 22-33). IEEE
A Black-box Approach for Detecting Systems Anomalies in Virtualized Environments
2017 (English) In: 2017 IEEE International Conference on Cloud and Autonomic Computing (ICCAC 2017), IEEE, 2017, p. 22-33. Conference paper, Published paper (Refereed)
Abstract [en]

Virtualization technologies allow cloud providers to optimize server utilization and cost by co-locating services in as few servers as possible. Studies have shown how applications in multi-tenant environments are susceptible to systems anomalies, such as abnormal resource usage due to performance interference. Effective detection of such anomalies requires techniques that adapt autonomously to dynamic service workloads, require limited instrumentation to cope with diverse application services, and infer relationships between anomalies non-intrusively to avoid "alarm fatigue" at scale. We propose a black-box framework that includes an unsupervised prediction-based mechanism for automated anomaly detection in the multi-dimensional resource behaviour of datacenter nodes, and a graph-theoretic technique for ranking anomalous nodes across the datacenter. The proposed framework is evaluated using resource traces of over 100 virtual machines obtained from a production cluster, as well as traces obtained from an experimental testbed under realistic service composition. The techniques achieve an average normalized root mean squared forecast error and R^2 of (0.92, 0.07) across host servers and (0.70, 0.39) across virtual machines. The average detection rate is 88%, while explaining 62% of SLA violations with an average lead time of 6 time-points when the testbed is actively perturbed under three contention scenarios.
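
As a sketch of how anomalous nodes could be ranked once a dependency graph between hosts and VMs is in hand, a PageRank-style power iteration is shown below. The example edges, the damping factor, and the use of PageRank itself are illustrative assumptions, not the paper's exact graph-theoretic technique.

def rank(edges, nodes, damping=0.85, iters=50):
    """PageRank-style scores over a directed dependency graph."""
    score = {n: 1.0 / len(nodes) for n in nodes}
    out = {n: [v for u, v in edges if u == n] for n in nodes}
    for _ in range(iters):
        nxt = {n: (1 - damping) / len(nodes) for n in nodes}
        for u in nodes:
            targets = out[u] or nodes          # dangling nodes spread evenly
            for v in targets:
                nxt[v] += damping * score[u] / len(targets)
        score = nxt
    return sorted(score.items(), key=lambda kv: -kv[1])

print(rank([("vm1", "host1"), ("vm2", "host1"), ("host1", "vm3")],
           ["vm1", "vm2", "vm3", "host1"]))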

Place, publisher, year, edition, pages
IEEE, 2017
Keywords
Anomaly Detection, Performance Anomaly Detection, Performance Diagnosis, Cloud Computing, Virtualized Services, Unsupervised Learning, Time Series Analysis, Quality of Service
National Category
Computer Systems
Research subject
Computer Systems; Computing Science
Identifiers
urn:nbn:se:umu:diva-142031 (URN); 10.1109/ICCAC.2017.10 (DOI); 978-1-5386-1939-1 (ISBN)
Conference
2017 IEEE International Conference on Cloud and Autonomic Computing (ICCAC 2017), Tucson, Arizona, USA, 18–22 September 2017
Projects
Cloud Control
Funder
Swedish Research Council, C0590801
Available from: 2017-11-17 Created: 2017-11-17 Last updated: 2018-06-09. Bibliographically approved
Goumas, G., Nikas, K., Lakew, E. B., Kotselidis, C., Attwood, A., Elmroth, E., . . . Koziris, N. (2017). ACTiCLOUD: Enabling the Next Generation of Cloud Applications. In: Lee, K., Liu, L. (Ed.), 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS 2017). Paper presented at the 37th IEEE International Conference on Distributed Computing Systems (ICDCS), June 5-8, 2017, Atlanta, GA (pp. 1836-1845). IEEE Computer Society
ACTiCLOUD: Enabling the Next Generation of Cloud Applications
2017 (English) In: 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS 2017) / [ed] Lee, K., Liu, L., IEEE Computer Society, 2017, p. 1836-1845. Conference paper, Published paper (Refereed)
Abstract [en]

Despite their proliferation as a dominant computing paradigm, cloud computing systems lack effective mechanisms to manage their vast amounts of resources efficiently. Resources are stranded and fragmented, ultimately limiting cloud systems' applicability to large classes of critical applications with considerable resource demands. Eliminating the current technological barriers to actual fluidity and scalability of cloud resources is essential to strengthen cloud computing's role as a critical cornerstone for the digital economy. ACTiCLOUD proposes a novel cloud architecture that breaks the existing scale-up and share-nothing barriers and enables the holistic management of physical resources, both at the local cloud site and at distributed levels. Specifically, it advances the cloud resource management stack by extending state-of-the-art hypervisor technology beyond the physical server boundary and beyond localized cloud management systems, providing holistic resource management within a rack, within a site, and across distributed cloud sites. On top of this, ACTiCLOUD will adapt and optimize system libraries and runtimes (e.g., the JVM) as well as ACTiCLOUD-native applications: extremely demanding, critical classes of applications that currently face severe difficulties in matching their resource requirements to state-of-the-art cloud offerings.

Place, publisher, year, edition, pages
IEEE Computer Society, 2017
Series
IEEE International Conference on Distributed Computing Systems, ISSN 1063-6927
Keywords
cloud computing, resource management, in-memory databases, resource disaggregation, scale-up, rackscale hypervisor
National Category
Computer Systems
Identifiers
urn:nbn:se:umu:diva-142014 (URN); 10.1109/ICDCS.2017.252 (DOI); 000412759500173 (ISI); 978-1-5386-1791-5 (ISBN); 978-1-5386-1792-2 (ISBN); 978-1-5386-1793-9 (ISBN)
Conference
37th IEEE International Conference on Distributed Computing Systems (ICDCS), June 5-8, 2017, Atlanta, GA
Available from: 2017-11-20 Created: 2017-11-20 Last updated: 2018-06-09. Bibliographically approved
Ibidunmoye, O., Moghadam, M. H., Lakew, E. B. & Elmroth, E. (2017). Adaptive Service Performance Control using Cooperative Fuzzy Reinforcement Learning in Virtualized Environments. In: 10th IEEE/ACM International Conference on Utility and Cloud Computing, Austin, TX, USA, December 5-8, 2017. Paper presented at the 10th IEEE/ACM International Conference on Utility and Cloud Computing, Austin, Texas, USA, December 5-8, 2017. IEEE/ACM
Adaptive Service Performance Control using Cooperative Fuzzy Reinforcement Learning in Virtualized Environments
2017 (English) In: 10th IEEE/ACM International Conference on Utility and Cloud Computing, Austin, TX, USA, December 5-8, 2017, IEEE/ACM, 2017. Conference paper, Published paper (Refereed)
Abstract [en]

Designing efficient control mechanisms that meet strict performance requirements under changing workload demands, without sacrificing resource efficiency, remains a challenge in cloud infrastructures. A popular approach is fine-grained resource provisioning via auto-scaling mechanisms that rely on either threshold-based adaptation rules or sophisticated queuing/control-theoretic models. While it is difficult at design time to specify optimal threshold rules, it is even more challenging to infer precise performance models for the multitude of services. Recently, reinforcement learning has been applied to address this challenge. However, such approaches require many learning trials to stabilize, both initially and whenever operational conditions vary, limiting their applicability under dynamic workloads. To this end, we extend the standard reinforcement learning approach in two ways: a) we formulate the system state as a fuzzy space, and b) we exploit a set of cooperative agents that explore multiple fuzzy states in parallel to speed up learning. Through multiple experiments on a real virtualized testbed, we demonstrate that our approach converges quickly and meets performance targets at high efficiency, without explicit service models.
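
The fuzzy-state formulation can be sketched briefly: a crisp utilization reading activates overlapping fuzzy states, and the Q-update is weighted by each state's degree of membership. The triangular membership functions, the action set, and the update rule below are assumptions for illustration, not the paper's exact design (the cooperative, parallel exploration is omitted for brevity).

ACTIONS = ("scale_down", "hold", "scale_up")

def memberships(util):
    """Triangular fuzzy sets over utilization in [0, 1] (assumed shapes)."""
    return {"low": max(0.0, 1 - util / 0.5),
            "mid": max(0.0, 1 - abs(util - 0.5) / 0.5),
            "high": max(0.0, (util - 0.5) / 0.5)}

Q = {(s, a): 0.0 for s in ("low", "mid", "high") for a in ACTIONS}

def update(util, action, reward, next_util, alpha=0.1, gamma=0.9):
    mu = memberships(util)
    # fuzzy value of the next state: membership-weighted best actions
    value = sum(d * max(Q[(s, a)] for a in ACTIONS)
                for s, d in memberships(next_util).items())
    for s, degree in mu.items():   # weight each state's update by membership
        Q[(s, action)] += alpha * degree * (reward + gamma * value - Q[(s, action)])

update(0.8, "scale_up", reward=1.0, next_util=0.5)
print({k: round(v, 3) for k, v in Q.items() if v})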

Place, publisher, year, edition, pages
IEEE/ACM, 2017
Keywords
Performance control, Resource allocation, Quality of service, Reinforcement learning, Autoscaling, Autonomic computing
National Category
Computer Systems
Research subject
Computer Systems; Computing Science
Identifiers
urn:nbn:se:umu:diva-142032 (URN); 10.1145/3147213.3147225 (DOI); 978-1-4503-5149-2 (ISBN)
Conference
10th IEEE/ACM International Conference on Utility and Cloud Computing, Austin, Texas, USA, December 5-8 2017
Projects
Cloud Control
Funder
Swedish Research Council, C0590801
Available from: 2017-11-17 Created: 2017-11-17 Last updated: 2018-06-09
Mehta, A., Baddour, R., Svensson, F., Gustafsson, H. & Elmroth, E. (2017). Calvin Constrained: A Framework for IoT Applications in Heterogeneous Environments. In: Lee, K., Liu, L. (Ed.), 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS 2017). Paper presented at the 37th IEEE International Conference on Distributed Computing Systems (ICDCS), June 5-8, 2017, Atlanta, GA (pp. 1063-1073). IEEE Computer Society
Calvin Constrained: A Framework for IoT Applications in Heterogeneous Environments
Show others...
2017 (English) In: 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS 2017) / [ed] Lee, K., Liu, L., IEEE Computer Society, 2017, p. 1063-1073. Conference paper, Published paper (Refereed)
Abstract [en]

Calvin is an IoT framework for application development, deployment, and execution in heterogeneous environments that include clouds, edge resources, and embedded or constrained resources. Inside Calvin, all the distributed resources are viewed by the application as one environment. The framework provides multi-tenancy and simplifies the development of IoT applications, which are represented as a dataflow of application components (named actors) and their communication. The idea behind Calvin bears similarity to the serverless architecture and can be seen as Actor as a Service rather than Function as a Service. This makes Calvin very powerful, as it not only scales actors quickly but also provides an easy actor migration capability. In this work, we propose Calvin Constrained, an extension of the Calvin framework that covers resource-constrained devices. Due to the limited memory and processing power of embedded devices, the constrained side of the framework can only support a limited subset of the Calvin features. The current implementation of Calvin Constrained supports actors implemented in C as well as Python, where support for Python actors is enabled by using MicroPython as a statically allocated library; this enables automatic management of state variables and enhances code re-usability. As would be expected, Python-coded actors demand more resources than C-coded ones. We show that the extra resources needed are manageable on current off-the-shelf microcontroller-equipped devices when using the Calvin framework.
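
In the spirit of the actor/dataflow model described above, and purely as an illustration rather than the Calvin API, a tiny actor system can be expressed as follows: actors expose a mailbox, fire on available tokens, and forward results along dataflow connections.

class Actor:
    """Toy dataflow actor: consumes tokens, emits results downstream."""
    def __init__(self, fn):
        self.fn = fn
        self.inbox = []
        self.downstream = []

    def send(self, token):
        self.inbox.append(token)

    def fire(self):
        while self.inbox:                    # fire while tokens are available
            result = self.fn(self.inbox.pop(0))
            for actor in self.downstream:
                actor.send(result)

double = Actor(lambda t: 2 * t)
sink = Actor(lambda t: print("sink received", t))
double.downstream.append(sink)

for token in (1, 2, 3):
    double.send(token)
for actor in (double, sink):
    actor.fire()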

Place, publisher, year, edition, pages
IEEE Computer Society, 2017
Series
IEEE International Conference on Distributed Computing Systems, ISSN 1063-6927
Keywords
IoT, Distributed Cloud, Serverless Architecture, Dataflow Application Development Model
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-142013 (URN); 10.1109/ICDCS.2017.181 (DOI); 000412759500098 (ISI); 978-1-5386-1791-5 (ISBN); 978-1-5386-1792-2 (ISBN); 978-1-5386-1793-9 (ISBN)
Conference
37th IEEE International Conference on Distributed Computing Systems (ICDCS), June 5-8, 2017, Atlanta, GA
Funder
Swedish Research Council, C0590801
Available from: 2017-11-21 Created: 2017-11-21 Last updated: 2018-09-07. Bibliographically approved