Personalized multi-tier federated learning
Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0002-3451-2851
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics. ORCID iD: 0000-0001-7320-1506
Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0002-9842-7840
2025 (English) In: Neural information processing: 31st International Conference, ICONIP 2024, Auckland, New Zealand, December 2–6, 2024, proceedings, part II / [ed] Mufti Mahmud; Maryam Doborjeh; Kevin Wong; Andrew Chi Sing Leung; Zohreh Doborjeh; M. Tanveer, Springer Nature, 2025, p. 192-207
Conference paper, Published paper (Refereed)
Abstract [en]

The key challenge of personalized federated learning (PerFL) is to capture the statistical heterogeneity of the data with inexpensive communication while delivering customized performance to participating devices. To address these challenges, we introduce personalized federated learning in a multi-tier architecture (PerMFL), which obtains optimized and personalized local models when there are known team structures across devices. We provide theoretical guarantees for PerMFL, which offers linear convergence rates for smooth strongly convex problems and sub-linear convergence rates for smooth non-convex problems. Numerical experiments demonstrate the robust empirical performance of PerMFL, which outperforms the state of the art in multiple personalized federated learning tasks.
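The multi-tier idea in the abstract (devices grouped into teams, with personalized decision variables at the device, team, and server tiers) can be sketched as a toy simulation. This is an illustration only, not the paper's PerMFL algorithm: the quadratic device losses, the penalty weights `lam` and `mu`, and the closed-form team update are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 2 teams x 3 devices. Device i in team t has a quadratic loss
# f_i(w) = 0.5 * ||w - c_i||^2 with heterogeneous optima c_i; teams are
# centered at 0 and 5, so inter-team heterogeneity dominates.
teams = [[rng.normal(ctr, 0.3, size=2) for _ in range(3)] for ctr in (0.0, 5.0)]

dim = 2
global_w = np.zeros(dim)
team_w = [np.zeros(dim) for _ in teams]
local_w = [[np.zeros(dim) for _ in team] for team in teams]

lam, mu, lr = 1.0, 1.0, 0.1  # assumed penalty weights and step size

for _ in range(200):
    for t, team in enumerate(teams):
        for i, c in enumerate(team):
            # Tier 1: each device descends its local loss plus a proximal
            # pull toward its team model (this is the personalization term).
            grad = (local_w[t][i] - c) + lam * (local_w[t][i] - team_w[t])
            local_w[t][i] -= lr * grad
        # Tier 2: the team model balances its devices against the global model.
        avg = np.mean(local_w[t], axis=0)
        team_w[t] = (lam * len(team) * avg + mu * global_w) / (lam * len(team) + mu)
    # Tier 3: the server averages the team models.
    global_w = np.mean(team_w, axis=0)

# Each device keeps its own personalized model; the two teams stay distinct
# instead of collapsing onto the single global model.
print([w.round(2) for w in team_w])
```

The sketch shows the intended behavior: team models settle between their devices' optima and the global average, and device models remain personalized rather than identical to their team model.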

Place, publisher, year, edition, pages
Springer Nature, 2025. p. 192-207
Series
Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937 ; 2283
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:umu:diva-228859
DOI: 10.1007/978-981-96-6951-6_14
Scopus ID: 2-s2.0-105010821830
ISBN: 978-981-96-6950-9 (print)
ISBN: 978-981-96-6951-6 (electronic)
OAI: oai:DiVA.org:umu-228859
DiVA, id: diva2:1892627
Conference
ICONIP 2024, 31st International Conference on Neural Information Processing, Auckland, New Zealand, December 2-6, 2024
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2024-08-27 Created: 2024-08-27 Last updated: 2025-07-28 Bibliographically approved
In thesis
1. Advancing federated learning: algorithms and use-cases
2024 (English)Doctoral thesis, comprehensive summary (Other academic)
Alternative title [sv]
Förbättrad federerad maskininlärning : algoritmer och tillämpningar
Abstract [en]

Federated Learning (FL) is a distributed machine learning paradigm that enables the training of models across numerous clients or organizations without requiring the transfer of local data. This method addresses concerns about data privacy and ownership by keeping raw data on the client itself and only sharing model updates with a central server. Despite its benefits, federated learning faces unique challenges, such as data heterogeneity, computation and communication overheads, and the need for personalized models; these challenges result in reduced model performance, lower efficiency, and longer training times.

This thesis investigates these issues from theoretical, empirical, and practical perspectives through four contributions: federated feature selection, adaptive client selection, model personalization, and socio-cognitive applications. Firstly, we addressed data heterogeneity in federated feature selection for horizontal FL by developing algorithms based on mutual information and multi-objective optimization. Secondly, we tackled system heterogeneity, i.e., variations in computation, storage, and communication capabilities among clients, by proposing a solution that ranks clients with multi-objective optimization for efficient, fair, and adaptive participation in model training. Thirdly, we addressed client drift caused by data heterogeneity in hierarchical federated learning with a personalized federated learning approach. Lastly, we focused on two key applications that benefit from the FL framework but suffer from data heterogeneity. The first predicts the level of autobiographical memory recall of events associated with lifelog images by developing clustered personalized FL algorithms, which help select effective lifelog image cues for cognitive interventions for the clients. The second is a personal image privacy advisor for each client; along with data heterogeneity, the privacy advisor faces data scarcity. We developed a daisy-chain-enabled clustered personalized FL algorithm that predicts whether an image should be shared, kept private, or recommended for sharing by a third party.
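The first contribution's core idea (each client scores its features locally, so raw data never leaves the device, and only the scores are aggregated) can be sketched with a simple mutual-information criterion. This is a toy illustration, not the thesis's algorithm: the discrete MI estimator, the plain server-side averaging, and the top-k rule are assumptions made for the sketch.

```python
import math
from collections import Counter

def mutual_info(xs, ys):
    """Discrete mutual information I(X; Y) in nats from paired samples."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(c / n * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def local_scores(features, labels):
    """Each client scores its own feature columns against its own labels;
    only these scalar scores would be sent to the server."""
    return [mutual_info(col, labels) for col in features]

# Two clients with tiny binary datasets (features as columns, then labels).
# Feature 0 determines the label on both clients; features 1 and 2 are noise.
client_a = ([[0, 0, 1, 1], [0, 1, 0, 1], [0, 1, 1, 0]], [0, 0, 1, 1])
client_b = ([[1, 1, 0, 0], [1, 0, 1, 0], [0, 1, 1, 0]], [1, 1, 0, 0])

# Server side: average the per-client MI scores and keep the top-k features.
scores = [sum(s) / 2 for s in zip(local_scores(*client_a), local_scores(*client_b))]
k = 1
selected = sorted(range(len(scores)), key=lambda j: -scores[j])[:k]
print(selected)  # feature 0 carries all the label information
```

The same pattern extends to the multi-objective variant mentioned above by replacing the single MI score with a vector of criteria per feature.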

Our findings reveal that the proposed methods significantly outperform current state-of-the-art FL algorithms, delivering superior performance, earlier convergence, and greater training efficiency.

Place, publisher, year, edition, pages
Umeå: Umeå University, 2024. p. 84
Series
Report / UMINF, ISSN 0348-0542 ; 24.09
Keywords
Federated Learning, Federated Feature Selection, Statistical Heterogeneity, System Heterogeneity, Model Personalization, Socio-Cognitive Applications
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-228863 (URN)
978-91-8070-463-2 (ISBN)
978-91-8070-464-9 (ISBN)
Public defence
2024-09-23, Hörsal HUM.D.210, Humanisthuset, Umeå, 13:00 (English)
Opponent
Supervisors
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2024-09-02 Created: 2024-08-27 Last updated: 2024-08-28 Bibliographically approved
2. Personalized models and optimization in federated learning
2025 (English)Doctoral thesis, comprehensive summary (Other academic)
Alternative title [sv]
Personanpassade modeller och optimering inom federerad inlärning
Abstract [en]

The rapid increase in data generation, combined with the impracticality of centralizing large-scale datasets and the growing complexity of machine learning tasks, has driven the development of distributed learning techniques. Among these, Federated Learning (FL) has gained significant attention due to its privacy-preserving approach, where multiple clients collaboratively train a global model without sharing their local data. However, FL faces several key challenges, including data heterogeneity, high computational costs, and communication inefficiencies. These issues become more pronounced in real-world scenarios where client data distributions are non-IID, computational resources are limited, and communication is constrained.

This thesis addresses these challenges through the development of efficient algorithms for Personalized Federated Learning (pFL) and Constrained Federated Learning. The proposed approaches are designed to handle heterogeneous data, minimize computational overhead, and reduce communication costs while maintaining strong theoretical guarantees.

Specifically, the thesis introduces three key contributions: (1) pFLMF, a novel pFL formulation based on low-rank matrix optimization, leveraging Burer-Monteiro factorization to enable personalization without relying on predefined distance metrics. (2) PerMFL, an algorithm for multi-tier pFL that introduces personalized decision variables for both teams and individual devices, enabling efficient optimization in scenarios with hierarchical client structures. (3) FedFW, a projection-free algorithm for constrained FL, which emphasizes low computational cost, privacy preservation, and communication efficiency through sparse signal exchanges.
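The projection-free ingredient behind contribution (3) can be illustrated with a classical Frank-Wolfe step: instead of projecting onto the constraint set, a linear minimization oracle (LMO) over the l1 ball returns a 1-sparse vertex, which is cheap to compute and cheap to communicate. This sketch is not the FedFW algorithm itself (which uses more refined sparse exchanges); the quadratic client losses, the unit l1-ball constraint, and the step-size schedule are assumptions made for the sketch.

```python
import numpy as np

def lmo_l1(grad, radius=1.0):
    """Linear minimization oracle over the l1 ball:
    argmin_{||s||_1 <= radius} <grad, s> is a 1-sparse vertex."""
    j = int(np.argmax(np.abs(grad)))
    s = np.zeros_like(grad)
    s[j] = -radius * np.sign(grad[j])
    return s

# Toy constrained problem: minimize the average of two client quadratics
# 0.5 * ||w - c_i||^2 subject to ||w||_1 <= 1.
targets = [np.array([3.0, 1.0]), np.array([1.0, 3.0])]

w = np.zeros(2)
for t in range(300):
    # Server aggregates client gradients, then answers with a 1-sparse
    # LMO vertex; no projection step is ever needed.
    g = np.mean([w - c for c in targets], axis=0)
    s = lmo_l1(g)
    gamma = 2.0 / (t + 2)               # classical Frank-Wolfe step size
    w = (1.0 - gamma) * w + gamma * s   # convex combination: always feasible
```

Because every iterate is a convex combination of vertices of the l1 ball, feasibility holds by construction, which is exactly what makes projection-free methods attractive when projections are expensive.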

By addressing critical issues in FL, such as data heterogeneity, computation costs, and communication bottlenecks, the proposed algorithms advance the field of Federated Learning, providing robust and scalable solutions for real-world applications. 

Place, publisher, year, edition, pages
Umeå: Umeå University, 2025. p. 30
Series
Research report in mathematical statistics, ISSN 1653-0829
Keywords
Federated Learning, Machine Learning, Optimization
National Category
Computational Mathematics
Identifiers
urn:nbn:se:umu:diva-234464 (URN)
978-91-8070-600-1 (ISBN)
978-91-8070-599-8 (ISBN)
Public defence
2025-02-12, UB.A.220, Samhällsvetarhuset, 09:00 (English)
Opponent
Supervisors
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2025-01-29 Created: 2025-01-23 Last updated: 2025-01-27 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Banerjee, Sourasekhar; Dadras, Ali; Yurtsever, Alp; Bhuyan, Monowar H.

Search in DiVA

By author/editor
Banerjee, Sourasekhar; Dadras, Ali; Yurtsever, Alp; Bhuyan, Monowar H.
By organisation
Department of Computing Science
Department of Mathematics and Mathematical Statistics
Computer Sciences

Search outside of DiVA

Google
Google Scholar
