Townend, Paul
Publications (10 of 10)
Patel, Y. S. & Townend, P. (2024). A stable matching approach to energy efficient and sustainable serverless scheduling for the green cloud continuum. In: Proceedings: 18th IEEE International Conference on Service-Oriented System Engineering, SOSE 2024. Paper presented at 18th IEEE International Conference on Service-Oriented System Engineering, SOSE 2024, Shanghai, China, 15-18 July, 2024. (pp. 25-35). Institute of Electrical and Electronics Engineers (IEEE)
A stable matching approach to energy efficient and sustainable serverless scheduling for the green cloud continuum
2024 (English). In: Proceedings: 18th IEEE International Conference on Service-Oriented System Engineering, SOSE 2024, Institute of Electrical and Electronics Engineers (IEEE), 2024, p. 25-35. Conference paper, Published paper (Refereed)
Abstract [en]

Cloud infrastructures are evolving from centralised systems to geographically distributed federations of edge devices, fog nodes, and clouds - often known as the Cloud-Edge Continuum. Continuum systems are dynamic, often massive in scale, and feature disparate infrastructure providers and platforms; this greatly increases the complexity of developing and managing applications. The Serverless paradigm shows the potential to greatly simplify the process of building Continuum applications - however, current scheduling mechanisms for Serverless Continuum platforms pay little attention to reducing the energy consumption and improving the sustainability of function execution. This is a significant omission, made worse as computing nodes within a Continuum may be powered by renewable energy sources that are intermittent and unpredictable, making low-powered and bottleneck nodes unavailable. There is great opportunity to design a decentralized energy management scheme for scheduling Serverless functions that takes advantage of the different layers of the Continuum, such as IoT devices located at the Edge, on-premises clusters closer to the data sources, or large Cloud infrastructures. To achieve this, we formally model a green energy-aware Serverless workload scheduling problem for the multi-provider Cloud-Edge Continuum. We then design a stable matching-based technique for decentralized energy management (utilising a distributed controller) that considers the availability of green energy nodes and the QoS requirements of Serverless functions. We prove the complexity, stability and termination of the proposed heuristic algorithm, and also compare its performance with baseline scheduling techniques.
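
To make the scheduling idea above concrete, here is a minimal capacity-aware deferred-acceptance (Gale-Shapley style) matching between serverless functions and Continuum nodes. It is an illustrative sketch only: the node attributes (green_ratio, latency_ms, capacity), the qos_priority field, and the preference rules are assumptions made for the example, not the algorithm proposed in the paper.

```python
# Illustrative sketch: functions propose to nodes, nodes keep their best proposals.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    green_ratio: float       # assumed fraction of demand covered by renewables
    latency_ms: float
    capacity: int
    accepted: list = field(default_factory=list)

@dataclass
class Fn:
    name: str
    qos_priority: int        # higher = more QoS-critical (assumed)

def schedule(functions, nodes):
    """Capacity-aware deferred acceptance: functions propose, nodes keep the best."""
    # Every function prefers greener nodes first, then lower latency.
    prefs = {f.name: sorted(nodes, key=lambda n: (-n.green_ratio, n.latency_ms))
             for f in functions}
    free = list(functions)
    next_choice = {f.name: 0 for f in functions}
    while free:
        f = free.pop()
        if next_choice[f.name] >= len(nodes):
            continue                                  # f has exhausted all nodes
        node = prefs[f.name][next_choice[f.name]]
        next_choice[f.name] += 1
        node.accepted.append(f)
        node.accepted.sort(key=lambda g: -g.qos_priority)
        while len(node.accepted) > node.capacity:     # reject lowest-priority overflow
            free.append(node.accepted.pop())
    return {n.name: [f.name for f in n.accepted] for n in nodes}

nodes = [Node("edge-solar", 0.9, 5.0, capacity=1), Node("cloud-grid", 0.3, 40.0, capacity=2)]
functions = [Fn("f-video", 3), Fn("f-batch", 1), Fn("f-iot", 2)]
print(schedule(functions, nodes))
```

Because each function proposes to each node at most once, the loop terminates, and the usual deferred-acceptance argument gives a stable assignment under these (assumed) preferences - the kind of stability property the abstract refers to.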

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
Cloud-Edge Continuum, Function-as-a-Service, Matching, Renewable energy, Scheduling, Serverless Computing
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-231522 (URN), 10.1109/SOSE62363.2024.00010 (DOI), 001327894000004 (), 2-s2.0-85207640233 (Scopus ID), 9798331539580 (ISBN)
Conference
18th IEEE International Conference on Service-Oriented System Engineering, SOSE 2024, Shanghai, China, 15-18 July, 2024.
Funder
The Kempe Foundations; EU, Horizon Europe, 101092711
Available from: 2024-11-22. Created: 2024-11-22. Last updated: 2024-11-22. Bibliographically approved.
Lalaguna, A., Townend, P., Ojaghi, B. & Vázquez, C. (2024). Cooperative and connected mobility services in the cloud-edge continuum with function as a service technology and AI-enabled orchestration. In: Ahmed Bendahmane; Abdelaaziz El Hibaoui; Mohamed Essaaidi (Ed.), Proceedings of 2024 1st Edition of the Mediterranean Smart Cities Conference, MSCC 2024. Paper presented at Mediterranean Smart Cities Conference (MSCC 2024), 2-4 May, 2024, Martil, Morocco (pp. 1-6). IEEE
Cooperative and connected mobility services in the cloud-edge continuum with function as a service technology and AI-enabled orchestration
2024 (English). In: Proceedings of 2024 1st Edition of the Mediterranean Smart Cities Conference, MSCC 2024 / [ed] Ahmed Bendahmane; Abdelaaziz El Hibaoui; Mohamed Essaaidi, IEEE, 2024, p. 1-6. Conference paper, Published paper (Refereed)
Abstract [en]

We propose a novel system to manage Traffic Priority at city intersections by means of our Mobility-Hub (M-Hub), a next-generation Traffic Light Controller that leverages the power of cloud-edge continuum computing, Digital Twin, and Cellular Vehicle-to-Everything (C-V2X) technologies to transform traffic management into a dynamic and intelligent system. M-Hub acts as an open-edge computing platform, enabling real-time data processing, third-party containerized applications, and decision-making at the network edge. COGNIT is an open-source cloud-edge continuum framework that offers many improvements for next-generation Intelligent Transportation Systems (ITS). The continuum allows for the integration of diverse data sources, including vehicular data from C-V2X communication, real-time traffic information from detectors or cameras, and other environmental data, to seamlessly generate Digital Twins in the ACISA smart mobility platform, SATURNO. By combining this data with advanced traffic optimization algorithms implemented in the COGNIT infrastructure, M-Hub can dynamically adjust traffic signal timings, optimize traffic flow, and reduce congestion with optimal use of computational resources. M-Hub has the potential to revolutionize urban mobility, enhancing safety, improving efficiency, and reducing environmental impact.

Place, publisher, year, edition, pages
IEEE, 2024
Series
IEEE International Conference on System of Systems Engineering, E-ISSN 2835-3161
Keywords
5G, AI, Automated Mobility services, C-V2X, Cloud-Edge Continuum, Connected, Cooperative, Digital Twin, Intelligent Transport Systems, Urban Mobility
National Category
Computer Systems
Identifiers
urn:nbn:se:umu:diva-231293 (URN), 10.1109/MSCC62288.2024.10697004 (DOI), 2-s2.0-85207056263 (Scopus ID), 9798350374001 (ISBN)
Conference
Mediterranean Smart Cities Conference (MSCC 2024), 2-4 May, 2024, Martil, Morocco
Available from: 2024-11-12. Created: 2024-11-12. Last updated: 2024-11-12. Bibliographically approved.
Patel, Y. S., Townend, P., Singh, A. & Östberg, P.-O. (2024). Modeling the green cloud continuum: integrating energy considerations into cloud-edge models. Cluster Computing, 27(4), 4095-4125
Modeling the green cloud continuum: integrating energy considerations into cloud-edge models
2024 (English). In: Cluster Computing, ISSN 1386-7857, E-ISSN 1573-7543, Vol. 27, no 4, p. 4095-4125. Article in journal (Refereed). Published
Abstract [en]

The energy consumption of Cloud–Edge systems is becoming a critical concern economically, environmentally, and societally; some studies suggest data centers and networks will collectively consume 18% of global electrical power by 2030. New methods are needed to mitigate this consumption, e.g. energy-aware workload scheduling, improved usage of renewable energy sources, etc. These schemes need to understand the interaction between energy considerations and Cloud–Edge components. Model-based approaches are an effective way to do this; however, current theoretical Cloud–Edge models are limited, and few consider energy factors. This paper analyses all relevant models proposed between 2016 and 2023, discovers key omissions, and identifies the major energy considerations that need to be addressed for Green Cloud–Edge systems (including interaction with energy providers). We investigate how these can be integrated into existing and aggregated models, and conclude with the high-level architecture of our proposed solution to integrate energy and Cloud–Edge models together.
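
For context on the kind of energy consideration the paper argues should be integrated into Cloud-Edge models, the sketch below shows a widely used linear, utilisation-based per-node power model extended with an assumed renewable-supply share. The class, parameter names, and values are illustrative assumptions, not models taken from the paper.

```python
# Illustrative sketch: a simple per-node energy model with a renewable share.
from dataclasses import dataclass

@dataclass
class NodeEnergyModel:
    p_idle_w: float          # power draw at 0% utilisation (watts)
    p_max_w: float           # power draw at 100% utilisation (watts)
    green_fraction: float    # assumed share of supply from renewable sources

    def power_w(self, utilisation: float) -> float:
        """Linear utilisation-based power model, a common first approximation."""
        u = min(max(utilisation, 0.0), 1.0)
        return self.p_idle_w + (self.p_max_w - self.p_idle_w) * u

    def brown_energy_wh(self, utilisation: float, hours: float) -> float:
        """Non-renewable ('brown') energy consumed over an interval."""
        return self.power_w(utilisation) * hours * (1.0 - self.green_fraction)

edge = NodeEnergyModel(p_idle_w=4.0, p_max_w=12.0, green_fraction=0.8)
dc = NodeEnergyModel(p_idle_w=120.0, p_max_w=350.0, green_fraction=0.3)
print(edge.brown_energy_wh(0.6, hours=1.0), dc.brown_energy_wh(0.6, hours=1.0))
```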

Place, publisher, year, edition, pages
Springer, 2024
Keywords
Models, Green, Cloud–Edge, Renewable energy, Resource management, Continuum
National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-223134 (URN), 10.1007/s10586-024-04383-w (DOI), 001199099600002 (), 2-s2.0-85190304126 (Scopus ID)
Funder
The Kempe Foundations; EU, Horizon Europe, 101092711; EU, Horizon 2020, 101000165
Available from: 2024-04-10. Created: 2024-04-10. Last updated: 2024-08-20. Bibliographically approved.
Kidane, L., Townend, P., Metsch, T. & Elmroth, E. (2023). Automated hyperparameter tuning for adaptive cloud workload prediction. In: UCC '23: Proceedings of the IEEE/ACM 16th International Conference on Utility and Cloud Computing. Paper presented at UCC '23: IEEE/ACM 16th International Conference on Utility and Cloud Computing, Taormina (Messina), Italy, December 4-7, 2023. New York: Association for Computing Machinery (ACM)
Automated hyperparameter tuning for adaptive cloud workload prediction
2023 (English). In: UCC '23: Proceedings of the IEEE/ACM 16th International Conference on Utility and Cloud Computing, New York: Association for Computing Machinery (ACM), 2023. Conference paper, Published paper (Refereed)
Abstract [en]

Efficient workload prediction is essential for enabling timely resource provisioning in cloud computing environments. However, achieving accurate predictions, ensuring adaptability to changing conditions, and minimizing computation overhead pose significant challenges for workload prediction models. Furthermore, the continuous streaming nature of workload metrics requires careful consideration when applying machine learning and data mining algorithms, as manual hyperparameter optimization can be time-consuming and suboptimal. We propose an automated parameter tuning and adaptation approach for the workload prediction models and concept drift detection algorithms used to predict future workload. Our method leverages a pre-built knowledge base of statistical features extracted from historical data, enabling automatic adjustment of model weights and concept drift detection parameters. Additionally, model adaptation is facilitated through a transfer learning approach. We evaluate the effectiveness of our automated approach by comparing it with static approaches using synthetic and real-world datasets. By automating the parameter tuning process and integrating concept drift detection, the proposed method enhances the accuracy and efficiency of workload prediction models by 50% in our experiments.
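
A minimal sketch of the knowledge-base idea described above: summarise the current workload window with a few statistical features and return the hyperparameters stored for the nearest historical profile. The feature set, knowledge-base entries, distance metric, and parameter names are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch: look up hyperparameters by statistical workload profile.
import numpy as np

def profile(window):
    """Summarise a workload window with a few simple statistical features."""
    w = np.asarray(window, dtype=float)
    return np.array([w.mean(), w.std(), w.max() - w.min(),
                     np.abs(np.diff(w)).mean()])      # mean step size ~ volatility

# Hypothetical knowledge base: historical statistical profile -> tuned hyperparameters.
KNOWLEDGE_BASE = [
    (np.array([100.0, 5.0, 20.0, 2.0]), {"lr": 1e-3, "window": 48, "drift_delta": 0.02}),
    (np.array([100.0, 40.0, 150.0, 25.0]), {"lr": 5e-4, "window": 96, "drift_delta": 0.10}),
]

def select_hyperparameters(window):
    """Return the hyperparameters stored for the closest historical profile."""
    p = profile(window)
    scale = np.maximum(np.abs(p), 1e-9)               # crude per-feature normalisation
    dists = [np.linalg.norm((p - q) / scale) for q, _ in KNOWLEDGE_BASE]
    return KNOWLEDGE_BASE[int(np.argmin(dists))][1]

recent_load = 100 + 40 * np.sin(np.linspace(0, 6, 96)) + np.random.randn(96)
print(select_hyperparameters(recent_load))
```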

Place, publisher, year, edition, pages
New York: Association for Computing Machinery (ACM), 2023
Keywords
Cloud computing, Hyperparameter optimization, Workload prediction, Concept drift, Data mining
National Category
Computer Systems
Identifiers
urn:nbn:se:umu:diva-223451 (URN), 10.1145/3603166.3632244 (DOI), 2-s2.0-85191659681 (Scopus ID), 979-8-4007-0234-1 (ISBN)
Conference
UCC '23: IEEE/ACM 16th International Conference on Utility and Cloud Computing, Taormina (Messina), Italy, December 4-7, 2023
Funder
Knut and Alice Wallenberg Foundation, 2019.0352; eSSENCE - An eScience Collaboration
Available from: 2024-04-16. Created: 2024-04-16. Last updated: 2024-05-13. Bibliographically approved.
Townend, P., Martí, A. P., De La Iglesia, I., Matskanis, N., Ohlson Timoudas, T., Hallmann, T., . . . Abdou, M. (2023). COGNIT: challenges and vision for a serverless and multi-provider cognitive cloud-edge continuum. In: 2023 IEEE International Conference on Edge Computing and Communications (EDGE): . Paper presented at 2023 IEEE International Conference on Edge Computing and Communications (EDGE), Chicago, Illinois, USA, July 2-8, 2023 (pp. 12-22). IEEE
COGNIT: challenges and vision for a serverless and multi-provider cognitive cloud-edge continuum
2023 (English). In: 2023 IEEE International Conference on Edge Computing and Communications (EDGE), IEEE, 2023, p. 12-22. Conference paper, Published paper (Refereed)
Abstract [en]

Use of the serverless paradigm in cloud application development is growing rapidly, primarily driven by its promise to free developers from the responsibility of provisioning, operating, and scaling the underlying infrastructure. However, modern cloud-edge infrastructures are characterized by large numbers of disparate providers, constrained resource devices, platform heterogeneity, infrastructural dynamicity, and the need to orchestrate geographically distributed nodes and devices over public networks. This presents significant management complexity that must be addressed if serverless technologies are to be used in production systems. This position paper introduces COGNIT, a major new European initiative aiming to integrate AI technology into cloud-edge management systems to create a Cognitive Cloud reference framework and associated tools for serverless computing at the edge. COGNIT aims to: 1) support an innovative new serverless paradigm for edge application management and enhanced digital sovereignty for users and developers; 2) enable on-demand deployment of large-scale, highly distributed and self-adaptive serverless environments using existing cloud resources; 3) optimize data placement according to changes in energy efficiency heuristics and application demands and behavior; 4) enable secure and trusted execution of serverless runtimes. We identify and discuss seven research challenges related to the integration of serverless technologies with multi-provider Edge infrastructures and present our vision for how these challenges can be solved. We introduce a high-level view of our reference architecture for serverless cloud-edge continuum systems, and detail four motivating real-world use cases that will be used for validation, drawing from domains within Smart Cities, Agriculture and Environment, Energy, and Cybersecurity.

Place, publisher, year, edition, pages
IEEE, 2023
Series
IEEE International Conference on Edge Computing, E-ISSN 2767-9918
National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-214140 (URN), 10.1109/EDGE60047.2023.00015 (DOI), 001063201700002 (), 2-s2.0-85173547015 (Scopus ID), 979-8-3503-0483-1 (ISBN), 979-8-3503-0484-8 (ISBN)
Conference
2023 IEEE International Conference on Edge Computing and Communications (EDGE), Chicago, Illinois, USA, July 2-8, 2023
Funder
EU, Horizon Europe, 101092711
Available from: 2023-09-05. Created: 2023-09-05. Last updated: 2024-07-02. Bibliographically approved.
Patel, Y. S., Townend, P. & Östberg, P.-O. (2023). Formal models for the energy-aware cloud-edge computing continuum: analysis and challenges. In: Lisa O'Conner (Ed.), 2023 IEEE international conference on service-oriented system engineering (SOSE): proceedings. Paper presented at 2023 IEEE International Conference on Service-Oriented System Engineering (SOSE), Athens, Greece, July 17-20, 2023 (pp. 48-59). IEEE
Formal models for the energy-aware cloud-edge computing continuum: analysis and challenges
2023 (English). In: 2023 IEEE international conference on service-oriented system engineering (SOSE): proceedings / [ed] Lisa O'Conner, IEEE, 2023, p. 48-59. Conference paper, Published paper (Refereed)
Abstract [en]

Cloud infrastructures are rapidly evolving from centralised systems to geographically distributed federations of edge devices, fog nodes, and clouds. These federations (often referred to as the Cloud-Edge Continuum) are the foundation upon which most modern digital systems depend, and consume enormous amounts of energy. This consumption is becoming a critical issue as society's energy challenges grow, and is a great concern for power grids, which must balance the needs of clouds against those of other users. The Continuum is highly dynamic, mobile, and complex; new methods to improve energy efficiency must be based on formal scientific models that identify and take into account a huge range of heterogeneous components, interactions, stochastic properties, and (potentially contradictory) service-level agreements and stakeholder objectives. Currently, few formal models of federated Cloud-Edge systems exist - and none adequately represent and integrate energy considerations (e.g. multiple providers, renewable energy sources, pricing, and the need to balance consumption over large areas with other non-Cloud consumers). This paper conducts a systematic analysis of current approaches to modelling Cloud, Cloud-Edge, and federated Continuum systems with an emphasis on the integration of energy considerations. We identify key omissions in the literature, and propose an initial high-level architecture and approach to begin addressing these - with the ultimate goal of developing a set of integrated models that include data centres, edge devices, fog nodes, energy providers, software workloads, end users, and stakeholder requirements and objectives. We conclude by highlighting the key research challenges that must be addressed to enable meaningful energy-aware Cloud-Edge Continuum modelling and simulation.

Place, publisher, year, edition, pages
IEEE, 2023
Series
IEEE International Symposium on Service-Oriented System Engineering, ISSN 2640-8228, E-ISSN 2642-6587
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-214709 (URN), 10.1109/SOSE58276.2023.00012 (DOI), 2-s2.0-85174930280 (Scopus ID), 979-8-3503-2239-2 (ISBN), 979-8-3503-2240-8 (ISBN)
Conference
2023 IEEE International Conference on Service-Oriented System Engineering (SOSE), Athens, Greece, July 17-20, 2023
Funder
The Kempe Foundations; EU, Horizon Europe; EU, Horizon 2020
Available from: 2023-09-25. Created: 2023-09-25. Last updated: 2023-11-02. Bibliographically approved.
Saleh Sedghpour, M. R. & Townend, P. (2022). Service mesh and eBPF-powered microservices: a survey and future directions. In: 2022 IEEE International Conference on Service-Oriented System Engineering (SOSE). Paper presented at IEEE SOSE 2022, 16th International Conference on Service-Oriented System Engineering (SOSE), San Francisco, USA, August 15-18, 2022 (pp. 176-184). IEEE
Service mesh and eBPF-powered microservices: a survey and future directions
2022 (English). In: 2022 IEEE International Conference on Service-Oriented System Engineering (SOSE), IEEE, 2022, p. 176-184. Conference paper, Published paper (Refereed)
Abstract [en]

Modern software development practice has seen a profound shift in architectural design, moving from monolithic approaches to distributed, microservice-based architectures. This allows for much simpler and faster application orchestration and management, especially in cloud-based systems, with the result being that orchestration systems themselves are becoming a key focus of computing research.

Orchestration system research addresses many different subject areas, including scheduling, automation, and security. However, the key characteristic that is common throughout is the complex and dynamic nature of distributed, multi-tenant cloud-based microservice systems that must be orchestrated. This complexity has led to many challenges in areas such as inter-service communication, observability, reliability, single-cluster to multi-cluster deployment, hybrid environments, and multi-tenancy.

The concept of service meshes has been introduced to handle this complexity. In essence, a service mesh is an infrastructure layer built directly into the microservices - or the nodes of orchestrators - as a set of configurable proxies that are responsible for the management, observability, and security of microservices.

Service meshes aim to be a full networking solution for microservices; however, they also introduce overhead into a system - this can be significant for low-powered edge devices, as service mesh proxies work in user space and are responsible for processing the incoming and outgoing traffic of each service. To mitigate performance issues caused by these proxies, the industry is pushing the boundaries of monitoring and security to kernel space by employing eBPF for faster and more efficient responses. 

We propose that the movement towards the use of service meshes as a networking solution for most of the features required by industry - combined with their integration with eBPF - is the next key trend in the evolution of microservices. This paper highlights the challenges of this movement, explores its current state, and discusses future opportunities in the context of microservices.

Place, publisher, year, edition, pages
IEEE, 2022
Series
Proceedings (IEEE International Symposium on Service-Oriented System Engineering), E-ISSN 2642-6587
Keywords
Service-Oriented Computing, Microservice, Service Mesh, eBPF
National Category
Computer Systems
Research subject
Computer Systems
Identifiers
urn:nbn:se:umu:diva-200303 (URN), 10.1109/SOSE55356.2022.00027 (DOI), 000942754700021 (), 2-s2.0-85141439805 (Scopus ID), 978-1-6654-7534-1 (ISBN), 978-1-6654-7535-8 (ISBN)
Conference
IEEE SOSE 2022, 16th International Conference on Service-Oriented System Engineering (SOSE), San Francisco, USA, August 15-18, 2022
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2022-10-14. Created: 2022-10-14. Last updated: 2023-09-05. Bibliographically approved.
Tarneberg, W., Fitzgerald, E., Bhuyan, M. H., Townend, P., Arzen, K.-E., Östberg, P.-O., . . . Kihl, M. (2022). The 6G Computing Continuum (6GCC): Meeting the 6G computing challenges. In: 2022 1st International Conference on 6G Networking, 6GNet 2022: . Paper presented at 1st International Conference on 6G Networking, 6GNet, 6-8 July 2022, Paris, France. IEEE Computer Society
The 6G Computing Continuum (6GCC): Meeting the 6G computing challenges
2022 (English). In: 2022 1st International Conference on 6G Networking, 6GNet 2022, IEEE Computer Society, 2022. Conference paper, Published paper (Refereed)
Abstract [en]

6G systems, such as Large Intelligent Surfaces, will require distributed, complex, and coordinated decisions throughout a very heterogeneous and cell-free infrastructure. This will require a fundamentally redesigned software infrastructure accompanied by massively distributed and heterogeneous computing resources, vastly different from current wireless networks. To address these challenges, in this paper we propose and motivate the concept of a 6G Computing Continuum (6GCC) and two research testbeds to advance the rate and quality of research. The 6G Computing Continuum is an end-to-end compute and software platform for realizing large intelligent surfaces and their tenant users and applications. One testbed, implemented on a Large Intelligent Surfaces platform, addresses the challenges of orchestrating shared computational resources in the wireless domain. The other, simulation-based, testbed is intended to address scalability and global-scale orchestration challenges.

Place, publisher, year, edition, pages
IEEE Computer Society, 2022
Keywords
6G, Computing at Scale, Computing Continuum, Distributed Orchestration, Large Intelligent Surfaces
National Category
Computer Sciences Computer Systems
Identifiers
urn:nbn:se:umu:diva-203080 (URN), 10.1109/6GNet54646.2022.9830459 (DOI), 000860313400032 (), 2-s2.0-85136138862 (Scopus ID), 9781665467636 (ISBN)
Conference
1st International Conference on 6G Networking, 6GNet, 6-8 July 2022, Paris, France
Available from: 2023-01-17. Created: 2023-01-17. Last updated: 2024-07-02. Bibliographically approved.
Townend, P. & Wo, T. (2022). Welcome message from the general chairs of IEEE JCC 2022. In: Proceedings: 2022 IEEE 13th International Conference on Joint Cloud Computing. Paper presented at 13th IEEE International Conference on Joint Cloud Computing, JCC 2022 (pp. VIII-VIII). IEEE Computer Society
Welcome message from the general chairs of IEEE JCC 2022
2022 (English). In: Proceedings: 2022 IEEE 13th International Conference on Joint Cloud Computing, IEEE Computer Society, 2022, p. VIII-VIII. Chapter in book (Other academic)
Place, publisher, year, edition, pages
IEEE Computer Society, 2022
National Category
Computer Sciences Telecommunications
Identifiers
urn:nbn:se:umu:diva-200871 (URN), 10.1109/JCC56315.2022.00005 (DOI), 2-s2.0-85140917612 (Scopus ID), 9781665462853 (ISBN)
Conference
13th IEEE International Conference on Joint Cloud Computing, JCC 2022
Note

2022 IEEE 13th International Conference on Joint Cloud Computing, San Francisco Bay, USA, 15-18 August, 2022.

Available from: 2022-11-15. Created: 2022-11-15. Last updated: 2022-11-15. Bibliographically approved.
Kidane, L., Townend, P., Metsch, T. & Elmroth, E. (2022). When and How to Retrain Machine Learning-based Cloud Management Systems. In: 2022 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW): . Paper presented at 2022 IEEE International Parallel and Distributed Processing Symposium, 30 May 2022-03 June 2022, Lyon, France (pp. 688-698). IEEE
When and How to Retrain Machine Learning-based Cloud Management Systems
2022 (English). In: 2022 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), IEEE, 2022, p. 688-698. Conference paper, Published paper (Refereed)
Abstract [en]

Cloud management systems increasingly rely on machine learning (ML) models to predict incoming workload rates, load, and other system behaviors for efficient dynamic resource management. Current state-of-the-art prediction models demonstrate high accuracy, but assume that data patterns remain stable. However, in production use, systems may face hardware upgrades, changes in user behavior, etc., that lead to concept drifts - significant changes in the characteristics of data streams over time. To mitigate prediction deterioration, ML models need to be updated - but questions of when and how to best retrain these models are unsolved in the context of cloud management. We present a pilot study that addresses these questions for one of the most common models for adaptive prediction - Long Short Term Memory (LSTM) - using synthetic and real-world workload data. Our analysis of when to retrain explores approaches for detecting when retraining is required, using both concept drift detection and prediction error thresholds, and at what point retraining should actually take place. Our analysis of how to retrain focuses on the data required for retraining, and what proportion should be taken from before and after the need for retraining is detected. We present initial results that indicate that retraining of existing models can achieve prediction accuracy close to that of newly trained models but at a much lower cost, and present initial advice for how to provide cloud management systems with support for automatic retraining of ML-based methods.
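
The when/how questions studied here can be illustrated with a small retraining monitor: a rolling prediction-error threshold decides when to retrain, and the retraining set mixes recent (post-drift) data with a sample of older data. The thresholds, window sizes, and the retrain_fn interface below are assumptions, not the paper's exact procedure.

```python
# Illustrative sketch: error-threshold retraining trigger with a pre/post-drift data mix.
from collections import deque
import random

def make_retraining_monitor(retrain_fn, window=50, error_threshold=0.15,
                            post_drift_fraction=0.7):
    """Return an observe(x, y_true, y_pred) callback that triggers retrain_fn."""
    errors = deque(maxlen=window)
    history = []                                       # all observed (x, y) pairs

    def observe(x, y_true, y_pred):
        history.append((x, y_true))
        errors.append(abs(y_true - y_pred) / max(abs(y_true), 1e-9))
        if len(errors) == window and sum(errors) / window > error_threshold:
            post = history[-window:]                   # recent, post-drift data
            n_pre = int(len(post) * (1 - post_drift_fraction) / post_drift_fraction)
            pre = random.sample(history[:-window], min(n_pre, len(history) - window))
            retrain_fn(pre + post)                     # retrain on the pre/post mix
            errors.clear()                             # reset the error window
    return observe

def dummy_retrain(samples):                            # stand-in for LSTM retraining
    print(f"retraining on {len(samples)} samples")

monitor = make_retraining_monitor(dummy_retrain)
for t in range(300):
    y_true = 150 if t > 200 else 100                   # abrupt workload drift at t=200
    monitor(t, y_true, y_pred=100)                     # a stale model keeps predicting 100
```

Run against the abrupt synthetic drift in the usage loop, the monitor fires shortly after the rolling error window fills with post-drift samples.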

Place, publisher, year, edition, pages
IEEE, 2022
Keywords
cloud computing, cloud workload prediction, concept drift, machine learning, time series prediction
National Category
Computer Systems
Identifiers
urn:nbn:se:umu:diva-198541 (URN), 10.1109/IPDPSW55747.2022.00120 (DOI), 000855041000086 (), 2-s2.0-85136190866 (Scopus ID), 9781665497473 (ISBN), 9781665497480 (ISBN)
Conference
2022 IEEE International Parallel and Distributed Processing Symposium, 30 May 2022-03 June 2022, Lyon, France
Funder
Knut and Alice Wallenberg Foundation; eSSENCE - An eScience Collaboration
Available from: 2022-08-09. Created: 2022-08-09. Last updated: 2023-09-05. Bibliographically approved.