Townend, Paul
Publications (10 of 21)
Wen, Y., Townend, P., Östberg, P.-O., Souza, A. & Courageux-Sudan, C. (2025). A decentralized microservice scheduling approach using service mesh in cloud-edge systems. In: Lisa O’Conner (Ed.), 2025 IEEE international conference on joint cloud computing: proceedings. Paper presented at IEEE JCC 2025 – The 16th IEEE International Conference on JointCloud Computing (part of IEEE CISOSE 2025), Tucson, Arizona, USA, July 21-24, 2025 (pp. 52-60). IEEE Computer Society
A decentralized microservice scheduling approach using service mesh in cloud-edge systems
2025 (English) In: 2025 IEEE international conference on joint cloud computing: proceedings / [ed] Lisa O’Conner, IEEE Computer Society, 2025, p. 52-60. Conference paper, Published paper (Refereed)
Abstract [en]

As microservice-based systems scale across the cloud-edge continuum, traditional centralized scheduling mechanisms increasingly struggle with latency, coordination overhead, and fault tolerance. This paper presents a new architectural direction: leveraging service mesh sidecar proxies as decentralized, in-situ schedulers to enable scalable, low-latency coordination in large-scale, cloud-native environments. We propose embedding lightweight, autonomous scheduling logic into each sidecar, allowing scheduling decisions to be made locally without centralized control. This approach leverages the growing maturity of service mesh infrastructures, which support programmable distributed traffic management. We describe the design of such an architecture and present initial results demonstrating its scalability potential in terms of response time and latency under varying request rates. Rather than delivering a finalized scheduling algorithm, this paper presents a system-level architectural direction and preliminary evidence to support its scalability potential.
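The local decision loop the abstract describes — per-sidecar state, no central controller — can be sketched as follows. This is an illustrative stand-in, not the paper's algorithm; the class and field names are assumptions.

```python
import random

class SidecarScheduler:
    """Hypothetical in-proxy scheduler sketch: each sidecar keeps a local
    EWMA latency estimate per upstream replica and routes to the current
    best one, with occasional exploration. No central scheduler is consulted."""

    def __init__(self, replicas, alpha=0.3, explore=0.1):
        self.est = {r: 0.0 for r in replicas}  # EWMA latency (ms) per replica
        self.alpha = alpha                     # smoothing factor
        self.explore = explore                 # probability of a random probe

    def pick(self):
        # Mostly exploit the lowest estimated latency; sometimes explore
        # so estimates for rarely-used replicas stay fresh.
        if random.random() < self.explore:
            return random.choice(list(self.est))
        return min(self.est, key=self.est.get)

    def observe(self, replica, latency_ms):
        # Update the purely local estimate from an observed response time.
        self.est[replica] = (1 - self.alpha) * self.est[replica] + self.alpha * latency_ms
```

With `explore=0.0` the choice is deterministic, which makes the exploit path easy to inspect: after observing a slow replica, traffic shifts to the faster one on the next pick.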

Place, publisher, year, edition, pages
IEEE Computer Society, 2025
Keywords
Microservice-based systems, Service mesh, Decentralized scheduling, Sidecar proxy, Scalability, Latency, Distributed systems
National Category
Computer Sciences Computer Systems
Research subject
Computer Science; Computer Systems
Identifiers
urn:nbn:se:umu:diva-241674 (URN)10.1109/JCC67032.2025.00012 (DOI)2-s2.0-105016245878 (Scopus ID)979-8-3315-8915-8 (ISBN)
Conference
IEEE JCC 2025 – The 16th IEEE International Conference on JointCloud Computing (part of IEEE CISOSE 2025), Tucson, Arizona, USA, July 21-24, 2025
Available from: 2025-06-28 Created: 2025-06-28 Last updated: 2025-10-14. Bibliographically approved
Kidane, L., Townend, P., Metsch, T. & Elmroth, E. (2025). Balancing compression and prediction: a hybrid autoencoder-LSTM framework for cloud workloads. In: BDCAT 2025 - IEEE/ACM International Conference on Big Data Computing, Applications and Technologies, Co-Located Conference UCC 2025. Paper presented at 12th IEEE/ACM International Conference on Big Data Computing, Applications and Technologies, BDCAT 2025, Nantes, France, 1-4 December, 2025. Association for Computing Machinery (ACM), Article ID 10.
Balancing compression and prediction: a hybrid autoencoder-LSTM framework for cloud workloads
2025 (English) In: BDCAT 2025 - IEEE/ACM International Conference on Big Data Computing, Applications and Technologies, Co-Located Conference UCC 2025, Association for Computing Machinery (ACM), 2025, article id 10. Conference paper, Published paper (Refereed)
Abstract [en]

Accurate future workload prediction is an essential step for proactive resource allocation and efficient provisioning in cloud computing environments. Deep learning strategies have proven successful for this task, but they face challenges due to the high dimensionality of monitoring data, extensive preprocessing requirements, and computational overhead. In this paper, we propose a hybrid framework that integrates autoencoders for workload compression with Long Short-Term Memory (LSTM) networks for time-series forecasting. Unlike prior studies, our approach systematically analyzes the trade-off between compression ratio and predictive accuracy, demonstrating how dimensionality reduction can improve both scalability and robustness, thereby reducing the computational burden associated with processing massive-scale monitoring data. Experiments conducted on both synthetic and real-world datasets demonstrate that the proposed method achieves up to 60% data compression with minimal reconstruction loss, while also improving prediction accuracy compared to baseline LSTM models. We evaluate the overall performance of the framework using various metrics, including data reduction ratio, prediction accuracy, and the effects of different compression stages on predictive performance. Additionally, we quantify the computational savings in terms of CPU usage, memory footprint, and training/inference times, confirming the framework's feasibility for real-world deployment. These results underscore the potential of integrating compression and prediction to achieve scalable, accurate, and resource-efficient management of cloud workloads.
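The pipeline shape — compress the monitoring series, then forecast in the compressed space — can be sketched with deliberately simple stand-ins: mean-pooling in place of the learned autoencoder, and a trailing mean in place of the trained LSTM. The function names and the `ratio`/`k` parameters are illustrative, not from the paper.

```python
def compress(window, ratio=4):
    """Stand-in for the autoencoder: mean-pool each block of `ratio`
    samples into one latent value (the paper uses a learned encoder)."""
    return [sum(window[i:i + ratio]) / ratio
            for i in range(0, len(window), ratio)]

def forecast(latent, k=3):
    """Stand-in for the LSTM: predict the next latent value as the
    mean of the last k latents (the paper uses a trained LSTM)."""
    tail = latent[-k:]
    return sum(tail) / len(tail)

# 12 raw CPU-usage samples -> 3 latents (4x compression), then one forecast.
cpu = [10, 12, 11, 13, 40, 42, 41, 43, 20, 22, 21, 23]
latent = compress(cpu)
next_latent = forecast(latent)
```

The point of the sketch is only the data flow: the forecaster never touches the raw high-dimensional series, which is where the computational savings come from.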

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2025
Keywords
Autoencoders, Cloud computing, Data compression, Information extraction, Workload prediction
National Category
Computer Systems Computer Sciences
Identifiers
urn:nbn:se:umu:diva-248586 (URN)10.1145/3773276.3774300 (DOI)2-s2.0-105026855587 (Scopus ID)9798400722868 (ISBN)
Conference
12th IEEE/ACM International Conference on Big Data Computing, Applications and Technologies, BDCAT 2025, Nantes, France, 1-4 December, 2025.
Funder
Knut and Alice Wallenberg Foundation, KAW 2019.0352; eSSENCE - An eScience Collaboration
Available from: 2026-01-23 Created: 2026-01-23 Last updated: 2026-01-23. Bibliographically approved
Patel, Y. S., Choubey, A., Singh, A. & Townend, P. (2025). Decentralized multi-agent reinforcement learning for the green serverless cloud-edge continuum. In: Proceedings - 19th IEEE International Conference on Service-Oriented System Engineering, SOSE 2025. Paper presented at 19th IEEE International Conference on Service-Oriented System Engineering, SOSE 2025, Tucson, AZ, USA, 21-24 July, 2025 (pp. 108-117). Institute of Electrical and Electronics Engineers (IEEE)
Decentralized multi-agent reinforcement learning for the green serverless cloud-edge continuum
2025 (English) In: Proceedings - 19th IEEE International Conference on Service-Oriented System Engineering, SOSE 2025, Institute of Electrical and Electronics Engineers (IEEE), 2025, p. 108-117. Conference paper, Published paper (Refereed)
Abstract [en]

Cloud-Edge Continuum systems are inherently complex and massive, often featuring federated multi-provider stakeholders (e.g. cloud/edge service providers, energy providers), heterogeneous platforms, and dynamic infrastructures; this significantly increases the complexity of developing, deploying, and managing applications. Serverless computing offers a powerful tool to simplify and speed up Continuum application development. However, existing scheduling mechanisms for Serverless platforms focus primarily on performance metrics such as latency, model accuracy, and throughput, often neglecting critical factors such as energy efficiency and sustainability. This gap is further exacerbated in Continuum environments, where computational nodes may rely on unpredictable and intermittent green energy sources, leading to availability bottlenecks and energy constraints. This work investigates the design of a decentralized green energy-aware approach for scheduling Serverless functions across the Cloud-Edge Continuum. To achieve this, we introduce a formal model of the green energy-aware workload scheduling problem. We then develop a consensus-based upper confidence bound (UCB) approach for cooperative multi-agent reinforcement learning (MARL) that leverages distributed agents to factor energy awareness and the quality-of-service (QoS) requirements of different functions into their scheduling decisions. To demonstrate the practicality of our approach, we implement a real-world prototype using a cluster of Raspberry Pis, Cloud servers, Kubernetes, and OpenFaaS. Experimental results show that our approach improves green energy utilization by 44% and reduces total latency by 25% compared to the centralized technique, highlighting its energy efficiency, scalability, and overall sustainability in Continuum settings.
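One agent's node-selection step in a UCB scheme of the kind the abstract names can be sketched as standard UCB1 with a reward that blends green-energy availability and QoS. The consensus step between agents is omitted, and the class, field names, and weighting `w` are illustrative assumptions.

```python
import math

class UCBNodeSelector:
    """Sketch of one agent's view in a UCB-based scheduler: pick the node
    maximizing (mean reward + exploration bonus); reward blends green-energy
    fraction and a QoS score. Not the paper's full consensus MARL algorithm."""

    def __init__(self, nodes):
        self.counts = {n: 0 for n in nodes}   # times each node was chosen
        self.means = {n: 0.0 for n in nodes}  # running mean reward per node
        self.t = 0                            # total decisions made

    def select(self):
        self.t += 1
        for n, c in self.counts.items():      # play every arm once first
            if c == 0:
                return n
        def ucb(n):                           # UCB1 index
            return self.means[n] + math.sqrt(2 * math.log(self.t) / self.counts[n])
        return max(self.counts, key=ucb)

    def update(self, node, green_fraction, qos_score, w=0.5):
        # Reward: weighted mix of green-energy availability and QoS (both in [0,1]).
        reward = w * green_fraction + (1 - w) * qos_score
        c = self.counts[node] = self.counts[node] + 1
        self.means[node] += (reward - self.means[node]) / c  # incremental mean
```

After each node has been tried once, the exploration bonus keeps low-reward nodes from being abandoned forever, which matters when green-energy availability is intermittent.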

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
Cloud-Edge Continuum, Function-as-a-Service, Green Computing, Multi-Agent Reinforcement Learning, Serverless Computing
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-244578 (URN)10.1109/SOSE67019.2025.00017 (DOI)2-s2.0-105016114830 (Scopus ID)9798331589110 (ISBN)
Conference
19th IEEE International Conference on Service-Oriented System Engineering, SOSE 2025, Tucson, AZ, USA, 21-24 July, 2025.
Funder
The Kempe Foundations; EU, Horizon Europe, 101092711
Available from: 2025-10-10 Created: 2025-10-10 Last updated: 2025-10-10. Bibliographically approved
Massonet, P., Ponsard, C., Bouhou, M., Lessage, X., Mancini, M., Montero, R. S., . . . Townend, P. (2025). Executing mobile edge functions in the cloud-edge continuum: analyzing threats to location integrity. In: 2025 12th International Conference on Future Internet of Things and Cloud (FiCloud): . Paper presented at 12th International Conference on Future Internet of Things and Cloud, FiCloud 2025, Istanbul, Turkiye, 11-13 August, 2025 (pp. 18-25). Institute of Electrical and Electronics Engineers (IEEE)
Executing mobile edge functions in the cloud-edge continuum: analyzing threats to location integrity
2025 (English) In: 2025 12th International Conference on Future Internet of Things and Cloud (FiCloud), Institute of Electrical and Electronics Engineers (IEEE), 2025, p. 18-25. Conference paper, Published paper (Refereed)
Abstract [en]

With the exponential growth of edge devices, the cloud-edge continuum provides a natural evolution of the centralised cloud architecture to overcome the bottlenecks created by the growing volume of data that devices generate. Resource-constrained edge devices need to be able to offload computational tasks to the cloud-edge continuum. Providing resources located at the edge, close to resource-constrained devices, allows devices to offload potentially complex functions on demand with low latency and response time requirements. The COGNIT framework introduces the novel concept of function as a service (FaaS) at the edge and novel AI techniques for cloud-edge management. In this paper, we show how the novel edge FaaS model can be used to offload critical security functions from edge devices and enable them to protect themselves even though they don't have the resources for such protection. The paper's main contribution is to show how the edge FaaS model enables the design of multi-layer protection models between edge devices and the cloud-edge continuum AI-based orchestrator. In this model, the edge device provides a first layer of defense using application knowledge to protect itself, whereas the AI-based orchestrator provides a second layer of defense that is more generic because it does not know much about the edge application. The layered protection model is illustrated and validated on a cybersecurity case study where AI-based anomaly detection is deployed at the edge to secure mobile devices and detect anomalies as early and quickly as possible. The second contribution of the paper shows how continuous security anomaly detection can be designed as multiple functions that are triggered by monitored events to provide continuous detection at the edge for all events.
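The two-layer control flow described above — a cheap, application-aware device-side check that only escalates suspicious events to an offloaded function — can be sketched like this. The thresholds, event fields, and the local scoring stub standing in for the remote FaaS call are all illustrative assumptions, not COGNIT APIs.

```python
def first_layer(event, cpu_limit=0.9):
    """Device-side check (layer 1): a cheap, application-aware rule."""
    return event["cpu"] > cpu_limit or event["failed_logins"] > 3

def offload_to_faas(event):
    """Stand-in for invoking an edge FaaS anomaly detector (layer 2).
    In a real deployment this would be a remote function invocation;
    here it is a local scoring stub so the control flow is runnable."""
    score = 0.5 * min(event["cpu"], 1.0) + 0.5 * min(event["failed_logins"] / 10, 1.0)
    return score > 0.6

def handle(event):
    # Only events flagged by the cheap first layer pay the offload cost.
    if first_layer(event):
        return "anomaly" if offload_to_faas(event) else "suspicious"
    return "ok"
```

The design point the sketch illustrates is that the resource-constrained device never runs the expensive detector itself; it only decides when invoking it is worthwhile.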

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Series
International Conference on Future Internet of Things and Cloud (FiCloud), E-ISSN 2996-1017
Keywords
Anomaly detection, Edge Computing, FaaS, Mobile Device, Security architecture, Threat modelling
National Category
Computer Sciences Communication Systems
Identifiers
urn:nbn:se:umu:diva-248361 (URN)10.1109/FiCloud66139.2025.00010 (DOI)2-s2.0-105021954266 (Scopus ID)9798331554378 (ISBN)9798331554385 (ISBN)
Conference
12th International Conference on Future Internet of Things and Cloud, FiCloud 2025, Istanbul, Turkiye, 11-13 August, 2025
Funder
EU, Horizon Europe, 101092711
Available from: 2026-01-13 Created: 2026-01-13 Last updated: 2026-01-13. Bibliographically approved
Theodoropoulos, T., Patel, Y. S., Zdun, U., Townend, P., Korodanis, I., Makris, A. & Tserpes, K. (2025). GraphOpticon: a global proactive horizontal autoscaler for improved service performance & resource consumption. Future Generation Computer Systems, 174, Article ID 107926.
GraphOpticon: a global proactive horizontal autoscaler for improved service performance & resource consumption
2025 (English) In: Future Generation Computer Systems, ISSN 0167-739X, E-ISSN 1872-7115, Vol. 174, article id 107926. Article in journal (Refereed), Published
Abstract [en]

The increasing complexity of distributed computing environments necessitates efficient resource management strategies to optimize performance and minimize resource consumption. Although proactive horizontal autoscaling dynamically adjusts computational resources based on workload predictions, existing approaches primarily focus on improving workload resource consumption, often neglecting the overhead introduced by the autoscaling system itself. This can have dire ramifications for resource efficiency, since many prior solutions rely on multiple forecasting models per compute node or group of pods, leading to significant resource consumption by the autoscaling system. To address this, we propose GraphOpticon, a novel proactive horizontal autoscaling framework that leverages a single global forecasting model based on Spatiotemporal Graph Neural Networks. The experimental results demonstrate that GraphOpticon provides improved service performance and reduced resource consumption (from both the workloads involved and the autoscaling system itself). Indeed, GraphOpticon consistently outperforms other contemporary horizontal autoscaling solutions, such as Kubernetes’ Horizontal Pod Autoscaler, with improvements of 6.62% in median execution time, 7.62% in tail latency, and 6.77% in resource consumption, among others.
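The core proactive-scaling step — sizing a deployment for the forecast load rather than the current one — can be sketched in a few lines. The forecast itself (which GraphOpticon produces with a single global spatiotemporal GNN) is just an input here, and the function name and bounds are illustrative.

```python
import math

def target_replicas(predicted_rps, per_pod_rps, min_r=1, max_r=10):
    """Proactive horizontal-scaling sketch: choose the replica count that
    covers the *predicted* request rate, clamped to deployment bounds.
    `predicted_rps` would come from the forecasting model."""
    need = math.ceil(predicted_rps / per_pod_rps)
    return max(min_r, min(max_r, need))
```

Because the decision uses a forecast, scaling can complete before the load arrives — the contrast with reactive autoscalers, which only respond after utilization has already risen.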

Place, publisher, year, edition, pages
Elsevier, 2025
Keywords
Cloud Computing, Green Computing, Graph Neural Networks, Deep Learning, Resource Usage Forecasting, Resource Consumption, Service Performance
National Category
Computer Sciences Computer Systems
Research subject
computer and systems sciences
Identifiers
urn:nbn:se:umu:diva-239525 (URN)10.1016/j.future.2025.107926 (DOI)001510777900001 ()2-s2.0-105007654758 (Scopus ID)
Funder
EU, Horizon 2020, 101135775; EU, Horizon 2020, 101120990
Available from: 2025-06-03 Created: 2025-06-03 Last updated: 2025-06-30. Bibliographically approved
Gulbaz, R., Townend, P. & Östberg, P.-O. (2025). GreenContinuum: a formal model of a smart grid-aware edge-cloud continuum for carbon and energy management. In: Proceedings: 2025 IEEE International Conference on Cloud Computing Technology and Science (CloudCom): Nov. 14-16, 2025, Shenzhen, China. Paper presented at 16th IEEE International Conference on Cloud Computing Technology and Science (CloudCom 2025), Shenzhen, China, November 14-16, 2025 (pp. 1-8). IEEE Computer Society
GreenContinuum: a formal model of a smart grid-aware edge-cloud continuum for carbon and energy management
2025 (English) In: Proceedings: 2025 IEEE International Conference on Cloud Computing Technology and Science (CloudCom): Nov. 14-16, 2025, Shenzhen, China, IEEE Computer Society, 2025, p. 1-8. Conference paper, Published paper (Refereed)
Abstract [en]

The Edge-Cloud Continuum is a large-scale, loosely coupled system consisting of multiple stakeholders, regions, dynamic infrastructures, and conflicting objectives. With surging growth and demand, the Continuum’s energy and carbon footprint have massively increased, resulting in great operational expense, environmental impact, and strain on power grids. Methods to mitigate this face significant challenges: Quality of Service (QoS) guarantees must be balanced against not only carbon emissions, but the loadings, capacities, and QoS of the (smart) grids that power the underlying infrastructure. Integrated models to enable reasoning across both a Continuum and its associated SmartGrids are therefore required.

This work presents a formal model to reason across the integration of Smart Grids and the Edge-Cloud Continuum. Firstly, we identify the components, interactions, and properties crucial to mitigating cross-Continuum energy and carbon footprint while maintaining user, provider, and power grid QoS. We then present associated mathematical models to enable a model-based simulation to be developed based on our work. We present this simulation (all code is available for download) and use a simple scheduling algorithm to demonstrate the feasibility of utilizing knowledge from both the Smart Grid and the Edge-Cloud Continuum for carbon and energy management, showing that significant savings are possible.
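A toy version of the combined reasoning the abstract describes — placement that checks both node capacity and the headroom of the grid segment powering the node, then minimizes carbon intensity — might look like this. The field names are illustrative and deliberately much simpler than the paper's formal model.

```python
def place(job_kw, nodes):
    """Carbon-aware placement sketch: among nodes that can power the job
    (node capacity AND smart-grid headroom), pick the one whose grid
    segment currently has the lowest carbon intensity. Returns None if
    no node is feasible."""
    feasible = [n for n in nodes
                if n["free_kw"] >= job_kw and n["grid_headroom_kw"] >= job_kw]
    if not feasible:
        return None
    return min(feasible, key=lambda n: n["gco2_per_kwh"])

nodes = [
    {"id": "coal-dc",   "free_kw": 10, "grid_headroom_kw": 10, "gco2_per_kwh": 700},
    {"id": "hydro-dc",  "free_kw": 10, "grid_headroom_kw": 10, "gco2_per_kwh": 30},
    {"id": "wind-edge", "free_kw": 1,  "grid_headroom_kw": 10, "gco2_per_kwh": 15},
]
```

Note that the greenest node is not always chosen: `wind-edge` has the lowest intensity but too little free capacity for a 5 kW job, which is exactly the kind of cross-constraint the integrated model is meant to expose.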

Place, publisher, year, edition, pages
IEEE Computer Society, 2025
Series
Proceedings (IEEE International Conference on Cloud Computing Technology and Science. Online), ISSN 2380-8004, E-ISSN 2330-2186
Keywords
Modelling, Simulation, Edge-Cloud Continuum, Smart Grid, Energy Consumption, Carbon Footprint
National Category
Computer Systems
Identifiers
urn:nbn:se:umu:diva-245789 (URN)10.1109/CloudCom67567.2025.11331501 (DOI)979-8-3315-6634-0 (ISBN)979-8-3315-6635-7 (ISBN)
Conference
16th IEEE International Conference on Cloud Computing Technology and Science (CloudCom 2025), Shenzhen, China, November 14-16, 2025
Funder
EU, Horizon Europe; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2025-10-22 Created: 2025-10-22 Last updated: 2026-03-06. Bibliographically approved
Malla, I., Metsch, T. & Townend, P. (2025). Power aware cluster orchestration: taxonomy, initial results, and challenges. In: UCC '25: Proceedings of the 18th IEEE/ACM International Conference on Utility and Cloud Computing. Paper presented at Utility and Cloud Computing Conference, Nantes, France, December 1-4, 2025. ACM Publications, Article ID 53.
Power aware cluster orchestration: taxonomy, initial results, and challenges
2025 (English) In: UCC '25: Proceedings of the 18th IEEE/ACM International Conference on Utility and Cloud Computing, ACM Publications, 2025, article id 53. Conference paper, Published paper (Other (popular science, discussion, etc.))
Abstract [en]

Compute clusters are major power consumers in Cloud and Edge data centers, making it critical to reduce power usage and costs without compromising service-level objectives. Energy Performance Preference (EPP) settings and CPU frequency scaling can lower power, but typically at the cost of reduced performance. When considering clusters with heterogeneous power profiles, it is essential to map workloads to the most suitable profile based on their quality-of-service constraints. Current orchestrators overlook power-profile heterogeneity; this is a particular concern at the Edge, where otherwise identical hardware may range from power-optimized to performance-oriented yet remain indistinguishable to schedulers. We present a taxonomy of power-aware orchestration and extend the default Kubernetes scheduler with power-profile awareness. We evaluate the feasibility of this extended scheduler by comparing three power-profile-aware scheduling strategies on a testbed running a microservices benchmark, with results showing that average power use can be reduced by up to 12% while maintaining application performance. We conclude with key challenges and future research directions.
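The essence of a power-profile-aware scheduling decision — mapping a workload's QoS class to the most suitable node power profile — can be sketched as a scoring table in the spirit of a Kubernetes scheduler score plugin. The class names, profile names, and score values are illustrative assumptions, not the paper's strategies.

```python
def score_node(workload_class, profile):
    """Sketch of a power-profile-aware node score (0-100, higher is better):
    latency-critical work prefers performance profiles, best-effort work
    prefers power-saving ones. Unknown pairs score 0."""
    table = {
        ("latency-critical", "performance"): 100,
        ("latency-critical", "balanced"):     60,
        ("latency-critical", "power-save"):   10,
        ("best-effort",      "performance"):  30,
        ("best-effort",      "balanced"):     70,
        ("best-effort",      "power-save"):  100,
    }
    return table.get((workload_class, profile), 0)

def pick_node(workload_class, nodes):
    # Choose the highest-scoring node for this workload class.
    return max(nodes, key=lambda n: score_node(workload_class, n["profile"]))
```

The key observation from the abstract survives even in this toy form: two otherwise identical nodes become distinguishable to the scheduler once their power profile is exposed as a schedulable attribute.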

Place, publisher, year, edition, pages
ACM Publications, 2025
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-247378 (URN)10.1145/3773274.3774698 (DOI)2-s2.0-105027188573 (Scopus ID)979-8-4007-2285-1 (ISBN)
Conference
Utility and Cloud Computing Conference, Nantes, France, December 1-4, 2025
Funder
EU, Horizon Europe, 101092711
Available from: 2025-12-09 Created: 2025-12-09 Last updated: 2026-02-16. Bibliographically approved
Mudgal, A., Singh, M., Verma, A., Sahoo, K. S., Townend, P. & Bhuyan, M. (2025). Towards adaptive rule replacement for mitigating inference attacks in serverless SDN framework. In: Zuckerman D.; Ulema M.; Limam N.; Kim Y.-T.; Granville L.Z.; Fulber-Garcia V. (Ed.), Proceedings of IEEE/IFIP Network Operations and Management Symposium 2025, NOMS 2025. Paper presented at IEEE/IFIP Network Operations and Management Symposium, 12–16 May 2025, Honolulu, HI, USA (pp. 1-7). Institute of Electrical and Electronics Engineers Inc.
Towards adaptive rule replacement for mitigating inference attacks in serverless SDN framework
2025 (English) In: Proceedings of IEEE/IFIP Network Operations and Management Symposium 2025, NOMS 2025 / [ed] Zuckerman D.; Ulema M.; Limam N.; Kim Y.-T.; Granville L.Z.; Fulber-Garcia V., Institute of Electrical and Electronics Engineers Inc., 2025, p. 1-7. Conference paper, Published paper (Refereed)
Abstract [en]

In the rapidly evolving landscape of Software-Defined Networking (SDN), the enhancement of security measures against sophisticated cyber threats is paramount. Among these threats, inference attacks pose a significant risk by allowing adversaries to deduce the configurations and policies of SDN switches, thereby undermining the integrity and confidentiality of the network infrastructure. To address this critical issue, we introduce a novel dynamic rule replacement policy for SDN switches, leveraging the capabilities of a Support Vector Machine (SVM) for its implementation. Our approach utilizes a comprehensive set of statistical features, including duration analysis of flow rules, dispersion of packet match fields, and frequency of packet arrivals to identify patterns indicative of potential inference attacks. By dynamically adjusting the rules within SDN switches based on the analysis of these features, our policy significantly enhances the resilience of the network against such attacks. To accelerate the innovation and development of network services, this study proposes an integrated SDN architecture deployed over a serverless framework. This work serves as a starting point to enable researchers to realize the concept of modular serverless functions over traditional SDN environments. We show during inference attacks how a serverless framework improves the latency and resource utilization of the network compared to a traditional SDN framework. This study demonstrates an improvement in preventing inference attacks without compromising the performance and efficiency of the SDN infrastructure.
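The three feature families the abstract names for the SVM — flow-rule duration statistics, dispersion of packet match fields, and packet arrival frequency — can be computed from flow records as below. The record field names are illustrative assumptions; the classifier itself is omitted.

```python
from statistics import pstdev

def flow_features(flows):
    """Sketch of the per-window feature extraction feeding the classifier:
    mean rule duration, dispersion of a numeric match field (e.g. dst port),
    and overall packet arrival rate across the observation span."""
    durations = [f["removed_at"] - f["installed_at"] for f in flows]
    match_fields = [f["match_field"] for f in flows]
    packets = sum(f["packets"] for f in flows)
    span = max(f["removed_at"] for f in flows) - min(f["installed_at"] for f in flows)
    return {
        "mean_duration": sum(durations) / len(durations),
        "match_dispersion": pstdev(match_fields),
        "arrival_rate": packets / span if span else float(packets),
    }
```

An inference attacker typically probes with short-lived, widely dispersed flows, so vectors like these are plausible inputs for separating probe traffic from normal traffic, whatever classifier sits on top.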

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2025
Series
IEEE Symposium on Network Operations and Management, ISSN 2374-9709, E-ISSN 1542-1201
Keywords
Inference Attack, Rule Replacement Policy, SDN, serverless
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-242805 (URN)10.1109/NOMS57970.2025.11073645 (DOI)2-s2.0-105012184549 (Scopus ID)979-8-3315-3163-8 (ISBN)979-8-3315-3164-5 (ISBN)
Conference
IEEE/IFIP Network Operations and Management Symposium 12–16 May 2025, Honolulu, HI, USA
Funder
EU, Horizon Europe, 101092711; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2025-08-13 Created: 2025-08-13 Last updated: 2025-08-13. Bibliographically approved
Patel, Y. S. & Townend, P. (2024). A stable matching approach to energy efficient and sustainable serverless scheduling for the green cloud continuum. In: Proceedings: 18th IEEE International Conference on Service-Oriented System Engineering, SOSE 2024. Paper presented at 18th IEEE International Conference on Service-Oriented System Engineering, SOSE 2024, Shanghai, China, 15-18 July, 2024. (pp. 25-35). Institute of Electrical and Electronics Engineers (IEEE)
A stable matching approach to energy efficient and sustainable serverless scheduling for the green cloud continuum
2024 (English) In: Proceedings: 18th IEEE International Conference on Service-Oriented System Engineering, SOSE 2024, Institute of Electrical and Electronics Engineers (IEEE), 2024, p. 25-35. Conference paper, Published paper (Refereed)
Abstract [en]

Cloud infrastructures are evolving from centralised systems to geographically distributed federations of edge devices, fog nodes, and clouds - often known as the Cloud-Edge Continuum. Continuum systems are dynamic, often massive in scale, and feature disparate infrastructure providers and platforms; this greatly increases the complexity of developing and managing applications. The Serverless paradigm shows the potential to greatly simplify the process of building Continuum applications - however, current scheduling mechanisms for Serverless Continuum platforms pay little attention to reducing the energy consumption and improving the sustainability of function execution. This is a significant omission, made worse as computing nodes within a Continuum may be powered by renewable energy sources that are intermittent and unpredictable, making low-powered and bottleneck nodes unavailable. There is great opportunity to design a decentralized energy management scheme for scheduling Serverless functions that takes advantage of the different layers of the Continuum, such as IoT devices located at the Edge, on-premises clusters closer to the data sources, or directly on large Cloud infrastructures. To achieve this, we formally model a green energy-aware Serverless workload scheduling problem for the multi-provider Cloud-Edge Continuum. We then design a stable-matching-based technique for decentralized energy management (utilising a distributed controller) that considers the availability of green energy nodes and the QoS requirements of Serverless functions. We prove the complexity, stability, and termination of the proposed heuristic algorithm, and compare its performance with baseline scheduling techniques.
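The matching machinery underlying such an approach can be illustrated with one-to-one Gale-Shapley deferred acceptance, functions proposing to nodes. This is a simplified stand-in for the paper's technique; in a green-energy-aware setting the preference lists would be derived from energy availability and QoS requirements rather than given directly, as they are here.

```python
def stable_match(func_prefs, node_prefs):
    """One-to-one Gale-Shapley sketch: functions propose to nodes in
    preference order; each node holds its best proposer so far. The
    result contains no blocking pair (it is stable)."""
    # node -> {function: rank} for O(1) preference comparisons
    rank = {n: {f: i for i, f in enumerate(p)} for n, p in node_prefs.items()}
    free = list(func_prefs)            # functions still unmatched
    next_idx = {f: 0 for f in func_prefs}
    engaged = {}                       # node -> currently held function
    while free:
        f = free.pop(0)
        n = func_prefs[f][next_idx[f]]  # f's best node not yet proposed to
        next_idx[f] += 1
        if n not in engaged:
            engaged[n] = f
        elif rank[n][f] < rank[n][engaged[n]]:
            free.append(engaged[n])    # node trades up; old function re-enters
            engaged[n] = f
        else:
            free.append(f)             # node rejects; f proposes again later
    return {f: n for n, f in engaged.items()}
```

Stability is the property the paper proves for its heuristic: no function-node pair would both prefer each other over their assigned partners, so no bilateral deviation can improve the schedule.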

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
Cloud-Edge Continuum, Function-as-a-Service, Matching, Renewable energy, Scheduling, Serverless Computing
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-231522 (URN)10.1109/SOSE62363.2024.00010 (DOI)001327894000004 ()2-s2.0-85207640233 (Scopus ID)9798331539580 (ISBN)
Conference
18th IEEE International Conference on Service-Oriented System Engineering, SOSE 2024, Shanghai, China, 15-18 July, 2024.
Funder
The Kempe Foundations; EU, Horizon Europe, 101092711
Available from: 2024-11-22 Created: 2024-11-22 Last updated: 2024-11-22. Bibliographically approved
Lalaguna, A., Townend, P., Ojaghi, B. & Vázquez, C. (2024). Cooperative and connected mobility services in the cloud-edge continuum with function as a service technology and AI-enabled orchestration. In: Ahmed Bendahmane; Abdelaaziz El Hibaoui; Mohamed Essaaidi (Ed.), Proceedings of 2024 1st Edition of the Mediterranean Smart Cities Conference, MSCC 2024. Paper presented at Mediterranean Smart Cities Conference (MSCC 2024), 2-4 May, 2024, Martil, Morocco (pp. 1-6). IEEE
Cooperative and connected mobility services in the cloud-edge continuum with function as a service technology and AI-enabled orchestration
2024 (English) In: Proceedings of 2024 1st Edition of the Mediterranean Smart Cities Conference, MSCC 2024 / [ed] Ahmed Bendahmane; Abdelaaziz El Hibaoui; Mohamed Essaaidi, IEEE, 2024, p. 1-6. Conference paper, Published paper (Refereed)
Abstract [en]

We propose a novel system to manage Traffic Priority at city intersections by means of our Mobility-Hub (M-Hub), a next-generation Traffic Light Controller that leverages the power of cloud-edge continuum computing, Digital Twin, and Cellular Vehicle-to-Everything (C-V2X) technologies to transform traffic management into a dynamic and intelligent system. M-Hub acts as an open edge-computing platform, enabling real-time data processing, third-party containerized applications, and decision-making at the network edge. COGNIT is an open-source cloud-edge continuum framework that offers many improvements for next-generation Intelligent Transportation Systems (ITS). The continuum allows for the integration of diverse data sources, including vehicular data from C-V2X communication, real-time traffic information from detectors or cameras, and other environmental data, to seamlessly generate Digital Twins in the ACISA smart mobility platform, SATURNO. By combining this data with advanced traffic optimization algorithms implemented in the COGNIT infrastructure, M-Hub can dynamically adjust traffic signal timings, optimize traffic flow, and reduce congestion with optimal use of computational resources. M-Hub has the potential to revolutionize urban mobility, enhancing safety, improving efficiency, and reducing environmental impact.

Place, publisher, year, edition, pages
IEEE, 2024
Series
IEEE International Conference on System of Systems Engineering, E-ISSN 2835-3161
Keywords
5G, AI, Automated Mobility services, C-V2X, Cloud-Edge Continuum, Connected, Cooperative, Digital Twin, Intelligent Transport Systems, Urban Mobility
National Category
Computer Systems
Identifiers
urn:nbn:se:umu:diva-231293 (URN)10.1109/MSCC62288.2024.10697004 (DOI)2-s2.0-85207056263 (Scopus ID)9798350374001 (ISBN)
Conference
Mediterranean Smart Cities Conference (MSCC 2024), 2-4 May, 2024, Martil, Morocco
Available from: 2024-11-12 Created: 2024-11-12 Last updated: 2024-11-12. Bibliographically approved