Tordsson, Johan
Publications (10 of 77) Show all publications
Souza, A., Rezaei, M., Laure, E. & Tordsson, J. (2019). Hybrid Resource Management for HPC and Data Intensive Workloads. In: 2019 19th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID). Paper presented at CCGrid 2019, 14-17 May 2019, Larnaca, Cyprus (pp. 399-409). Los Alamitos: IEEE Computer Society
Hybrid Resource Management for HPC and Data Intensive Workloads
2019 (English) In: 2019 19th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID), Los Alamitos: IEEE Computer Society, 2019, pp. 399-409. Conference paper, Published paper (Refereed)
Abstract [en]

Traditionally, High Performance Computing (HPC) and Data Intensive (DI) workloads have been executed on separate hardware using different tools for resource and application management. With increasing convergence of these paradigms, where modern applications are composed of both types of jobs in complex workflows, this separation becomes a growing overhead and the need for a common computation platform for both application areas increases. Executing both application classes on the same hardware not only enables hybrid workflows, but can also increase the usage efficiency of the system, as often not all available hardware is fully utilized by an application. While HPC systems are typically managed in a coarse-grained fashion, allocating a fixed set of resources exclusively to an application, DI systems employ a finer-grained regime, enabling dynamic resource allocation and control based on application needs. On the path to full convergence, a useful and less intrusive step is a hybrid resource management system that allows the execution of DI applications on top of standard HPC scheduling systems. In this paper we present the architecture of a hybrid system enabling dual-level scheduling for DI jobs in HPC infrastructures. Our system takes advantage of real-time resource utilization monitoring to efficiently co-schedule HPC and DI applications. The architecture is easily adaptable and extensible to current and new types of distributed workloads, allowing efficient combination of hybrid workloads on HPC resources with increased job throughput and higher overall resource utilization. The architecture is implemented based on the Slurm and Mesos resource managers for HPC and DI jobs. Our experimental evaluation in a real cluster, based on a set of representative HPC and DI applications, demonstrates that our hybrid architecture improves resource utilization by 20% and decreases queue makespan by 12%, while still meeting all deadlines for HPC jobs.
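
The co-scheduling decision described above can be illustrated with a small, hypothetical utilization check: a DI task is only launched on a node already allocated to an HPC job if real-time monitoring shows enough idle CPU and memory. The following is a minimal Python sketch under assumed data structures, node names, and a made-up safety headroom; the paper's actual system integrates Slurm and Mesos.

# Illustrative sketch: admit a DI task onto an HPC-allocated node only if
# monitored CPU/memory headroom is large enough. Thresholds and the Node
# structure are hypothetical, not taken from the paper.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_total: float   # cores
    mem_total: float   # GB
    cpu_used: float    # cores busy according to real-time monitoring
    mem_used: float    # GB in use according to real-time monitoring

def can_coschedule(node, di_cpu, di_mem, headroom=0.10):
    """Keep a safety headroom for the resident HPC job."""
    cpu_free = node.cpu_total * (1.0 - headroom) - node.cpu_used
    mem_free = node.mem_total * (1.0 - headroom) - node.mem_used
    return di_cpu <= cpu_free and di_mem <= mem_free

def pick_node(nodes, di_cpu, di_mem):
    """Among feasible nodes, pick the one with the most idle CPU."""
    feasible = [n for n in nodes if can_coschedule(n, di_cpu, di_mem)]
    return max(feasible, key=lambda n: n.cpu_total - n.cpu_used, default=None)

cluster = [Node("n1", 32, 128, 30.0, 100.0), Node("n2", 32, 128, 12.0, 40.0)]
print(pick_node(cluster, di_cpu=8, di_mem=16))  # -> the less loaded node, n2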

Place, publisher, year, edition, pages
Los Alamitos: IEEE Computer Society, 2019
Keywords
Resource Management, High Performance Computing, Data Intensive Computing, Mesos, Slurm, Bootstrapping
National subject category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-155619 (URN) 10.1109/CCGRID.2019.00054 (DOI) 978-1-7281-0913-8 (ISBN) 978-1-7281-0912-1 (ISBN)
Conference
CCGrid 2019, 14-17 May 2019, Larnaca, Cyprus
Note

Originally included in thesis in manuscript form

Available from: 2019-01-24 Created: 2019-01-24 Last updated: 2019-07-17 Bibliographically reviewed
Souza, A., Papadopoulos, A. V., Tomás Bolivar, L., Gilbert, D. & Tordsson, J. (2018). Hybrid Adaptive Checkpointing for Virtual Machine Fault Tolerance. In: Li, J., Chandra, A., Guo, T. & Cai, Y. (Eds.), Proceedings - 2018 IEEE International Conference on Cloud Engineering, IC2E 2018. Paper presented at 2018 IEEE International Conference on Cloud Engineering (IC2E 2018), 17–20 April 2018, Orlando, Florida, USA (pp. 12-22). Institute of Electrical and Electronics Engineers (IEEE)
Hybrid Adaptive Checkpointing for Virtual Machine Fault Tolerance
2018 (English) In: Proceedings - 2018 IEEE International Conference on Cloud Engineering, IC2E 2018 / [ed] Li J., Chandra A., Guo T., Cai Y., Institute of Electrical and Electronics Engineers (IEEE), 2018, pp. 12-22. Conference paper, Published paper (Refereed)
Abstract [en]

Active Virtual Machine (VM) replication is an application-independent and cost-efficient mechanism for high availability and fault tolerance, with several recently proposed implementations based on checkpointing. However, these methods may suffer from large impacts on application latency, excessive resource usage overheads, and/or unpredictable behavior for varying workloads. To address these problems, we propose a hybrid approach that uses a Proportional-Integral (PI) controller to dynamically switch between periodic and on-demand checkpointing. Our mechanism automatically selects the method that minimizes application downtime by adapting itself to changes in workload characteristics. The implementation is based on modifications to QEMU, LibVirt, and OpenStack, to seamlessly provide fault tolerant VM provisioning and to enable the controller to dynamically select the best checkpointing mode. Our evaluation is based on experiments with a video streaming application, an e-commerce benchmark, and a software development tool. The experiments demonstrate that our adaptive hybrid approach improves both application availability and resource usage compared to static selection of a checkpointing method, with application performance gains and negligible overheads.
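
As a rough illustration of the control idea only (not the controller design, gains, or setpoint from the paper), the hypothetical Python sketch below uses a PI controller on measured downtime to decide when to switch from periodic to on-demand checkpointing.

# Hypothetical PI-based mode switch: when the downtime control signal grows,
# switch to on-demand checkpointing; otherwise stay periodic. Setpoint, gains
# and threshold are illustrative values only.
class CheckpointModeController:
    def __init__(self, downtime_setpoint_ms=50.0, kp=0.05, ki=0.01, threshold=1.0):
        self.setpoint = downtime_setpoint_ms
        self.kp, self.ki = kp, ki
        self.integral = 0.0
        self.threshold = threshold

    def update(self, measured_downtime_ms, dt=1.0):
        error = measured_downtime_ms - self.setpoint
        self.integral += error * dt
        signal = self.kp * error + self.ki * self.integral
        return "on_demand" if signal > self.threshold else "periodic"

controller = CheckpointModeController()
for downtime in [20, 35, 80, 120, 60, 30]:  # measured downtime (ms) per interval
    print(downtime, controller.update(downtime))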

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
Keywords
Fault Tolerance, Resource Management, Checkpoint, COLO, Control Theory
National subject category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-152033 (URN) 10.1109/IC2E.2018.00023 (DOI) 2-s2.0-85048315473 (Scopus ID) 978-1-5386-5009-7 (ISBN) 978-1-5386-5008-0 (ISBN)
Conference
2018 IEEE International Conference on Cloud Engineering (IC2E 2018), 17–20 April 2018, Orlando, Florida, USA
Available from: 2018-09-24 Created: 2018-09-24 Last updated: 2019-01-24 Bibliographically reviewed
Tesfatsion, S. K., Wadbro, E. & Tordsson, J. (2018). PerfGreen: Performance and Energy Aware Resource Provisioning for Heterogeneous Clouds. In: 2018 IEEE International Conference on Autonomic Computing (ICAC). Paper presented at the 15th IEEE International Conference on Autonomic Computing (ICAC 2018), Trento, Italy, September 3-7, 2018 (pp. 81-90).
PerfGreen: Performance and Energy Aware Resource Provisioning for Heterogeneous Clouds
2018 (English) In: 2018 IEEE International Conference on Autonomic Computing (ICAC), 2018, pp. 81-90. Conference paper, Published paper (Refereed)
Abstract [en]

Improving energy efficiency in a cloud environment is challenging because of poor energy proportionality, low resource utilization, interference, as well as workload, application, and hardware dynamism. In this paper we present PerfGreen, a dynamic auto-tuning resource management system for improving energy efficiency with minimal performance impact in heterogeneous clouds. PerfGreen achieves this through a combination of admission control, scheduling, and online resource allocation methods with performance isolation and application priority techniques. Scheduling in PerfGreen is energy aware, and power management capabilities such as CPU frequency adaptation and hard CPU power limiting are exploited. CPU scaling is combined with performance isolation techniques, including CPU pinning and quota enforcement, for prioritized virtual machines to improve energy efficiency. An evaluation based on our prototype implementation shows that PerfGreen with its energy-aware scheduler and resource allocator on average reduces energy usage by 53%, improves performance per watt by 64%, and increases server density by 25%, while keeping performance deviations to a minimum.
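
The sketch below only illustrates the flavor of knob setting the abstract describes (CPU pinning, frequency adaptation, and quota enforcement favoring prioritized VMs). The planning function, frequency levels, and quota values are assumptions made for illustration; PerfGreen's actual policies are in the paper.

# Illustrative sketch: prioritized VMs get pinned cores at a higher CPU
# frequency, best-effort VMs share the remaining cores with a reduced quota.
# Frequency steps and quota values are invented for illustration.
def plan_host_settings(host_cores, vms):
    """vms: list of dicts {'name': str, 'priority': bool, 'cores': int}."""
    plan, next_core = {}, 0
    for vm in sorted(vms, key=lambda v: not v["priority"]):  # prioritized first
        if vm["priority"]:
            pinned = list(range(next_core, next_core + vm["cores"]))
            next_core += vm["cores"]
            plan[vm["name"]] = {"pin": pinned, "freq": "high", "quota": 1.0}
        else:
            shared = list(range(next_core, host_cores))
            plan[vm["name"]] = {"pin": shared, "freq": "low", "quota": 0.5}
    return plan

print(plan_host_settings(8, [
    {"name": "web-prio", "priority": True, "cores": 4},
    {"name": "batch-1", "priority": False, "cores": 2},
]))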

Series
Proceedings of the International Conference on Autonomic Computing, ISSN 2474-0756
National subject category
Computer Systems
Identifiers
urn:nbn:se:umu:diva-145925 (URN) 10.1109/ICAC.2018.00018 (DOI) 000450120900009 ()
Conference
15th IEEE International Conference on Autonomic Computing (ICAC 2018), Trento, Italy, September 3-7, 2018
Available from: 2018-03-22 Created: 2018-03-22 Last updated: 2019-01-07 Bibliographically reviewed
Kostentinos Tesfatsion, S., Proaño, J., Tomás, L., Caminero, B., Carrión, C. & Tordsson, J. (2018). Power and Performance Optimization in FPGA-accelerated Clouds. Concurrency and Computation, 30(18), Article ID e4526.
Power and Performance Optimization in FPGA-accelerated Clouds
2018 (English) In: Concurrency and Computation, ISSN 1532-0626, E-ISSN 1532-0634, Vol. 30, no. 18, article id e4526. Article in journal (Other academic) Published
Abstract [en]

Energy management has become increasingly necessary in data centers to address all energy-related costs, including capital costs, operating expenses, and environmental impacts. Heterogeneous systems with mixed hardware architectures provide both throughput and processing efficiency for different specialized application types and thus have a potential for significant energy savings. However, the presence of multiple and different processing elements increases the complexity of resource assignment. In this paper, we propose a system for efficient resource management in heterogeneous clouds. The proposed approach maps applications' requirements to different resources, reducing power usage with minimum impact on performance. We propose a technique that combines the scheduling of custom hardware accelerators, in our case Field-Programmable Gate Arrays (FPGAs), with an optimized resource allocation technique for commodity servers. We consider an energy-aware scheduling technique that uses both the applications' performance and their deadlines to control the assignment of FPGAs to the applications that would consume the most energy. Once the scheduler has performed the mapping between a VM and an FPGA, an optimizer handles the remaining VMs in the server, using vertical scaling and CPU frequency adaptation to reduce energy consumption while maintaining the required performance. Our evaluation using interactive and data-intensive applications compares the effectiveness of the proposed solution in energy savings as well as in maintaining application performance, obtaining up to a 32% improvement in the performance-energy ratio on a mix of multimedia and e-commerce applications.
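
A minimal illustration of the scheduling rule described above (the FPGA goes to a deadline-feasible application that would otherwise consume the most energy) might look like the hypothetical sketch below; the App fields and numbers are invented, not taken from the paper.

# Illustrative sketch of FPGA assignment: among applications whose deadline can
# still be met when accelerated, give the FPGA to the one whose CPU-only
# execution would consume the most energy.
from dataclasses import dataclass

@dataclass
class App:
    name: str
    cpu_energy_j: float     # estimated energy if run on CPU only
    fpga_runtime_s: float   # estimated runtime if accelerated
    deadline_s: float

def assign_fpga(apps):
    eligible = [a for a in apps if a.fpga_runtime_s <= a.deadline_s]
    return max(eligible, key=lambda a: a.cpu_energy_j, default=None)

apps = [App("encoder", 900.0, 40.0, 60.0), App("search", 300.0, 80.0, 50.0)]
chosen = assign_fpga(apps)
print(chosen.name if chosen else "no FPGA assignment")  # -> encoder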

Place, publisher, year, edition, pages
John Wiley & Sons, 2018
Keywords
cloud computing, energy efficiency, FPGA-aware
National subject category
Computer Science
Identifiers
urn:nbn:se:umu:diva-121092 (URN) 10.1002/cpe.4526 (DOI) 000442575600010 ()
Research funder
Vetenskapsrådet
Available from: 2016-05-26 Created: 2016-05-26 Last updated: 2019-01-15 Bibliographically reviewed
Mehta, A., Bayuh Lakew, E., Tordsson, J. & Elmroth, E. (2018). Utility-based Allocation of Industrial IoT Applications in Mobile Edge Clouds. Umeå: Umeå universitet
Utility-based Allocation of Industrial IoT Applications in Mobile Edge Clouds
2018 (English) Report (Other academic)
Abstract [en]

Mobile Edge Clouds (MECs) create new opportunities and challenges in terms of scheduling and running applications that have a wide range of latency requirements, such as intelligent transportation systems, process automation, and smart grids. We propose a two-tier scheduler for allocating runtime resources to Industrial Internet of Things (IIoT) applications in MECs. The scheduler at the higher level runs periodically, monitoring system state and the performance of applications, and decides whether to admit new applications and migrate existing applications. In contrast, the lower-level scheduler decides which application will get the runtime resource next. We use performance-based metrics that tell the extent to which the runtimes are meeting the Service Level Objectives (SLOs) of the hosted applications. The Application Happiness metric is based on a single application's performance and SLOs. The Runtime Happiness metric is based on the Application Happiness of the applications the runtime is hosting. These metrics may be used for decision-making by the scheduler, rather than runtime utilization, for example.

We evaluate four scheduling policies for the high-level scheduler and five for the low-level scheduler. The objective for the schedulers is to minimize cost while meeting the SLO of each application. The policies are evaluated with respect to the number of runtimes, the impact on the performance of applications and utilization of the runtimes. The results of our evaluation show that the high-level policy based on Runtime Happiness combined with the low-level policy based on Application Happiness outperforms other policies for the schedulers, including the bin packing and random strategies. In particular, our combined policy requires up to 30% fewer runtimes than the simple bin packing strategy and increases the runtime utilization up to 40% for the Edge Data Center (DC) in the scenarios we evaluated.
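
The abstract does not give closed-form definitions of the two metrics, so the Python sketch below is only a plausible interpretation: Application Happiness as SLO attainment of a single application, and Runtime Happiness as an aggregate over the applications a runtime hosts. Both formulas are assumptions for illustration.

# Hypothetical happiness metrics: Application Happiness as achieved performance
# relative to the SLO target (capped at 1), Runtime Happiness as the mean
# happiness of the hosted applications.
def application_happiness(achieved, slo_target):
    """E.g. achieved/target throughput, or target/achieved latency, capped at 1."""
    if slo_target <= 0:
        return 1.0
    return min(achieved / slo_target, 1.0)

def runtime_happiness(app_happiness_values):
    if not app_happiness_values:
        return 1.0  # an empty runtime is trivially "happy"
    return sum(app_happiness_values) / len(app_happiness_values)

apps = [application_happiness(90, 100), application_happiness(120, 100)]
print(apps, runtime_happiness(apps))  # -> [0.9, 1.0] 0.95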

Place, publisher, year, edition, pages
Umeå: Umeå universitet, 2018. p. 28
Series
Report / UMINF, ISSN 0348-0542 ; 18.11
Keywords
Edge/Fog Computing, Hierarchical Resource Allocation, IoTs, Mobile Edge Clouds
National subject category
Computer Systems
Research subject
Computer Engineering
Identifiers
urn:nbn:se:umu:diva-151455 (URN)
Available from: 2018-09-04 Created: 2018-09-04 Last updated: 2018-09-07 Bibliographically reviewed
Mehta, A., Bayuh Lakew, E., Tordsson, J. & Elmroth, E. (2018). Utility-based Allocation of Industrial IoT Applications in Mobile Edge Clouds. In: 2018 IEEE 37th International Performance Computing and Communications Conference (IPCCC). Paper presented at 37th IEEE International Performance Computing and Communications Conference (IPCCC), Orlando, FL, November 17-19, 2018. IEEE
Utility-based Allocation of Industrial IoT Applications in Mobile Edge Clouds
2018 (English) In: 2018 IEEE 37th International Performance Computing and Communications Conference (IPCCC), IEEE, 2018. Conference paper, Published paper (Refereed)
Abstract [en]

Mobile Edge Clouds (MECs) create new opportunities and challenges in terms of scheduling and running applications that have a wide range of latency requirements, such as intelligent transportation systems, process automation, and smart grids. We propose a two-tier scheduler for allocating runtime resources to Industrial Internet of Things (IIoT) applications in MECs. The scheduler at the higher level runs periodically, monitoring system state and the performance of applications, and decides whether to admit new applications and migrate existing applications. In contrast, the lower-level scheduler decides which application will get the runtime resource next. We use performance-based metrics that tell the extent to which the runtimes are meeting the Service Level Objectives (SLOs) of the hosted applications. The Application Happiness metric is based on a single application's performance and SLOs. The Runtime Happiness metric is based on the Application Happiness of the applications the runtime is hosting. These metrics may be used for decision-making by the scheduler, rather than runtime utilization, for example. We evaluate four scheduling policies for the high-level scheduler and five for the low-level scheduler. The objective for the schedulers is to minimize cost while meeting the SLO of each application. The policies are evaluated with respect to the number of runtimes, the impact on the performance of applications, and the utilization of the runtimes. The results of our evaluation show that the high-level policy based on Runtime Happiness combined with the low-level policy based on Application Happiness outperforms other policies for the schedulers, including the bin packing and random strategies. In particular, our combined policy requires up to 30% fewer runtimes than the simple bin packing strategy and increases the runtime utilization by up to 40% for the Edge Data Center (DC) in the scenarios we evaluated.

Place, publisher, year, edition, pages
IEEE, 2018
Series
IEEE International Performance Computing and Communications Conference (IPCCC), ISSN 1097-2641
National subject category
Computer Systems, Computer Engineering
Identifiers
urn:nbn:se:umu:diva-160322 (URN) 10.1109/PCCC.2018.8711075 (DOI) 000469326500052 () 978-1-5386-6808-5 (ISBN) 978-1-5386-6807-8 (ISBN) 978-1-5386-6809-2 (ISBN)
Conference
37th IEEE International Performance Computing and Communications Conference (IPCCC), Orlando, FL, November 17-19, 2018
Research funder
EU, Horizon 2020, ICT30
Available from: 2019-06-17 Created: 2019-06-17 Last updated: 2019-06-17 Bibliographically reviewed
Tesfatsion, S. K., Klein, C. & Tordsson, J. (2018). Virtualization Techniques Compared: Performance, Resource, and Power Usage Overheads in Clouds. Paper presented at ACM/SPEC International Conference on Performance Engineering (ICPE).
Virtualization Techniques Compared: Performance, Resource, and Power Usage Overheads in Clouds
2018 (English) Manuscript (preprint) (Other academic)
Abstract [en]

Virtualization solutions based on hypervisors or containers are enabling technologies for scalable, flexible, and cost-effective resource sharing. As the fundamental limitations of each technology are yet to be understood, they need to be regularly reevaluated to better understand the trade-off provided by latest technological advances. This paper presents an in-depth quantitative analysis of virtualization overheads in these two groups of systems and their gaps relative to native environments based on a diverse set of workloads that stress CPU, memory, storage, and networking resources. KVM and XEN are used to represent hypervisor-based virtualization, and LXC and Docker for container-based platforms. The systems were evaluated with respect to several cloud resource management dimensions including performance, isolation, resource usage, energy efficiency, start-up time, and density. Our study is useful both to practitioners to understand the current state of the technology in order to make the right decision in the selection, operation and/or design of platforms and to scholars to illustrate how these technologies evolved over time.

National subject category
Computer Systems
Identifiers
urn:nbn:se:umu:diva-145924 (URN)
Conference
ACM/SPEC International Conference on Performance Engineering (ICPE)
Available from: 2018-03-22 Created: 2018-03-22 Last updated: 2018-06-09
Lorido-Botran, T., Huerta, S., Tomás, L., Tordsson, J. & Sanz, B. (2017). An unsupervised approach to online noisy-neighbor detection in cloud data centers. Expert systems with applications, 89, 188-204
An unsupervised approach to online noisy-neighbor detection in cloud data centers
2017 (English) In: Expert systems with applications, ISSN 0957-4174, E-ISSN 1873-6793, Vol. 89, pp. 188-204. Article in journal (Refereed) Published
Abstract [en]

Resource sharing is an inherent characteristic of cloud data centers. Virtual Machines (VMs) and/or containers that are co-located in the same physical server often compete for resources, leading to interference. The noisy neighbor's effect refers to an anomaly caused by a VM/container limiting the resources accessed by another one. Our main contribution is an online, lightweight and application-agnostic solution for anomaly detection that follows an unsupervised approach. It is based on comparing models for different lags: Dirichlet Process Gaussian Mixture Models to characterize the resource usage profile of the application, and distance measures to score the similarity among models. An alarm is raised when there is an abrupt change in the short-term lag (i.e. a high distance score for short-term models) while the long-term state remains constant. We test the algorithm for different cloud workloads: websites, periodic batch applications, Spark-based applications, and a Memcached server. We are able to detect anomalies in the CPU and memory resource usage with up to 82–96% accuracy (recall) depending on the scenario. Compared to other baseline methods, our approach is able to detect anomalies successfully, while raising a low number of false positives, even in the case of applications with unusual normal behavior (e.g. periodic). Experiments show that our proposed algorithm is a lightweight and effective solution to detect the noisy neighbor effect without any historical information about the application, and it could potentially be applied to other kinds of anomalies.
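
The detection idea can be illustrated with a hypothetical Python sketch: fit Dirichlet Process Gaussian Mixture Models (here approximated with scikit-learn's BayesianGaussianMixture) on a short-term and a long-term window of resource-usage samples and raise an alarm when the short-term model diverges from the long-term one. The distance measure and threshold below are simplifications for illustration, not the paper's exact scoring.

# Illustrative noisy-neighbor alarm based on DPGMMs fitted on two time windows.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def fit_dpgmm(samples, max_components=5):
    model = BayesianGaussianMixture(
        n_components=max_components,
        weight_concentration_prior_type="dirichlet_process",
        random_state=0,
    )
    return model.fit(samples)

def divergence(short_model, long_model, reference):
    """Average log-likelihood gap of the reference samples under the two models."""
    return float(long_model.score(reference) - short_model.score(reference))

def noisy_neighbor_alarm(short_window, long_window, threshold=1.0):
    short_m, long_m = fit_dpgmm(short_window), fit_dpgmm(long_window)
    return divergence(short_m, long_m, long_window) > threshold

rng = np.random.default_rng(0)
normal = rng.normal(0.3, 0.05, size=(300, 1))     # e.g. CPU utilization samples
contended = rng.normal(0.8, 0.05, size=(60, 1))   # abrupt short-term change
print(noisy_neighbor_alarm(contended, normal))    # -> True (alarm)
print(noisy_neighbor_alarm(normal[-60:], normal)) # -> False (no alarm)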

Place, publisher, year, edition, pages
Elsevier, 2017
Keywords
Anomaly detection, Virtual machine, Cloud computing, DPGMM, Noisy-neighbor effect, Similarity distances
National subject category
Other Computer and Information Science
Identifiers
urn:nbn:se:umu:diva-138402 (URN) 10.1016/j.eswa.2017.07.038 (DOI) 000411420200016 ()
Available from: 2017-08-22 Created: 2017-08-22 Last updated: 2018-06-09 Bibliographically reviewed
Tärneberg, W., Papadopoulos, A. V., Mehta, A., Tordsson, J. & Kihl, M. (2017). Distributed Approach to the Holistic Resource Management of a Mobile Cloud Network. In: 2017 IEEE 1st International Conference on Fog and Edge Computing (ICFEC). Paper presented at 2017 IEEE 1st International Conference on Fog and Edge Computing (ICFEC), 14-15 May 2017, Madrid (pp. 51-60).
Distributed Approach to the Holistic Resource Management of a Mobile Cloud Network
2017 (English) In: 2017 IEEE 1st International Conference on Fog and Edge Computing (ICFEC), 2017, pp. 51-60. Conference paper, Published paper (Refereed)
Abstract [en]

The Mobile Cloud Network is an emerging cost- and capacity-heterogeneous distributed cloud topological paradigm that aims to remedy the application performance constraints imposed by centralised cloud infrastructures. A centralised cloud infrastructure and the adjoining Telecom network will struggle to accommodate the exploding amount of traffic generated by forthcoming highly interactive applications. Cost-effectively managing a Mobile Cloud Network computing infrastructure while meeting individual applications' performance goals is non-trivial and is at the core of our contribution. Due to the scale of a Mobile Cloud Network, a centralised approach is infeasible. Therefore, in this paper a distributed algorithm that addresses these challenges is presented. The presented approach works towards meeting individual applications' performance objectives, constricting system-wide operational cost, and mitigating resource usage skewness. The presented distributed algorithm does so by iteratively and independently acting on the objectives of each component with a common heuristic objective function. Systematic evaluations reveal that the presented algorithm quickly converges, performs near-optimally in terms of system-wide operational cost and application performance, and significantly outperforms similar naïve and random methods.

National subject category
Communication Systems
Research subject
administrative data processing
Identifiers
urn:nbn:se:umu:diva-145491 (URN) 10.1109/ICFEC.2017.10 (DOI) 000426944700006 () 978-1-5090-3047-7 (ISBN)
Conference
2017 IEEE 1st International Conference on Fog and Edge Computing (ICFEC), 14-15 May 2017, Madrid
Available from: 2018-03-07 Created: 2018-03-07 Last updated: 2018-06-09 Bibliographically reviewed
Tärneberg, W., Mehta, A., Wadbro, E., Tordsson, J., Eker, J., Kihl, M. & Elmroth, E. (2017). Dynamic application placement in the Mobile Cloud Network. Future generations computer systems, 70, 163-177
Dynamic application placement in the Mobile Cloud Network
2017 (English) In: Future generations computer systems, ISSN 0167-739X, E-ISSN 1872-7115, Vol. 70, pp. 163-177. Article in journal (Refereed) Published
Abstract [en]

To meet the challenges of consistent performance, low communication latency, and a high degree of user mobility, cloud and Telecom infrastructure vendors and operators foresee a Mobile Cloud Network that incorporates public cloud infrastructures with cloud augmented Telecom nodes in forthcoming mobile access networks. A Mobile Cloud Network is composed of distributed cost- and capacity-heterogeneous resources that host applications that in turn are subject to a spatially and quantitatively rapidly changing demand. Such an infrastructure requires a holistic management approach that ensures that the resident applications' performance requirements are met while sustainably supported by the underlying infrastructure. The contribution of this paper is three-fold. Firstly, this paper contributes with a model that captures the cost- and capacity-heterogeneity of a Mobile Cloud Network infrastructure. The model bridges the Mobile Edge Computing and Distributed Cloud paradigms by modelling multiple tiers of resources across the network and serves not just mobile devices but any client beyond and within the network. A set of resource management challenges is presented based on this model. Secondly, an algorithm that holistically and optimally solves these challenges is proposed. The algorithm is formulated as an application placement method that incorporates aspects of network link capacity, desired user latency and user mobility, as well as data centre resource utilisation and server provisioning costs. Thirdly, to address scalability, a tractable locally optimal algorithm is presented. The evaluation demonstrates that the placement algorithm significantly improves latency and resource utilisation skewness while minimising the operational cost of the system. Additionally, the proposed model and evaluation method demonstrate the viability of dynamic resource management of the Mobile Cloud Network and the need for accommodating rapidly mobile demand in a holistic manner.
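
As a rough illustration only (the weights, the data-centre model, and the skewness measure below are invented, not the paper's formulation), a holistic placement objective that trades off user latency, provisioning cost, and utilisation skewness could be sketched in Python as follows.

# Illustrative placement objective: score each candidate data centre by a
# weighted sum of expected latency, provisioning cost for the added load, and
# the utilisation skewness the placement would cause across all data centres.
import statistics

def skewness(utilizations):
    """Spread of utilization across data centres (coefficient of variation)."""
    mean = statistics.mean(utilizations)
    return statistics.pstdev(utilizations) / mean if mean > 0 else 0.0

def placement_cost(dc, app_load, all_dcs, w_lat=1.0, w_cost=1.0, w_skew=10.0):
    projected = [d["util"] + (app_load if d is dc else 0.0) for d in all_dcs]
    return (w_lat * dc["latency_ms"]
            + w_cost * dc["cost_per_unit"] * app_load
            + w_skew * skewness(projected))

def place(app_load, dcs):
    return min(dcs, key=lambda d: placement_cost(d, app_load, dcs))

dcs = [
    {"name": "edge-1", "latency_ms": 5, "cost_per_unit": 3.0, "util": 0.7},
    {"name": "core", "latency_ms": 30, "cost_per_unit": 1.0, "util": 0.3},
]
print(place(0.2, dcs)["name"])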

Keywords
Cloud computing, Distributed, Edge, Graph, Infrastructure, Mobile, Mobile Cloud, Modelling, Networks, Optimisation, Placement, Telco-cloud
National subject category
Communication Systems, Computer Engineering
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-129247 (URN) 10.1016/j.future.2016.06.021 (DOI) 000394401800015 () 2-s2.0-85006970632 (Scopus ID)
Available from: 2016-12-21 Created: 2016-12-21 Last updated: 2018-09-04 Bibliographically reviewed