Banerjee, Sourasekhar (ORCID iD: orcid.org/0000-0002-3451-2851)
Publications (10 of 12)
Banerjee, S., Dadras, A., Yurtsever, A. & Bhuyan, M. H. (2025). Personalized multi-tier federated learning. In: Mufti Mahmud; Maryam Doborjeh; Kevin Wong; Andrew Chi Sing Leung; Zohreh Doborjeh; M. Tanveer (Ed.), Neural information processing: 31st International Conference, ICONIP 2024, Auckland, New Zealand, December 2–6, 2024, proceedings, part II. Paper presented at ICONIP 2024, 31st International Conference on Neural Information Processing, Auckland, New Zealand, December 2-6, 2024 (pp. 192-207). Springer Nature
Personalized multi-tier federated learning
2025 (English). In: Neural information processing: 31st International Conference, ICONIP 2024, Auckland, New Zealand, December 2–6, 2024, proceedings, part II / [ed] Mufti Mahmud; Maryam Doborjeh; Kevin Wong; Andrew Chi Sing Leung; Zohreh Doborjeh; M. Tanveer, Springer Nature, 2025, p. 192-207. Conference paper, Published paper (Refereed)
Abstract [en]

The key challenge of personalized federated learning (PerFL) is to capture the statistical heterogeneity of data with inexpensive communication and to deliver customized performance for participating devices. To address these challenges, we introduce personalized federated learning in a multi-tier architecture (PerMFL), which obtains optimized and personalized local models when there are known team structures across devices. We provide theoretical guarantees for PerMFL, which offers linear convergence rates for smooth strongly convex problems and sub-linear convergence rates for smooth non-convex problems. Numerical experiments demonstrate the robust empirical performance of PerMFL, which outperforms the state-of-the-art in multiple personalized federated learning tasks.
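The two-tier device/team structure in the abstract can be illustrated with a minimal sketch (all names here are ours, not from the paper): device models are first averaged within each known team, and the team models are then averaged into a global model.

```python
def average(models):
    """Coordinate-wise mean of a list of parameter vectors."""
    n = len(models)
    return [sum(m[i] for m in models) / n for i in range(len(models[0]))]

def two_tier_aggregate(teams):
    """teams: list of teams, each a list of device parameter vectors.
    Returns (team_models, global_model) — one model per tier."""
    team_models = [average(team) for team in teams]
    return team_models, average(team_models)

teams = [[[1.0, 2.0], [3.0, 4.0]],   # team A: two devices
         [[5.0, 6.0]]]               # team B: one device
team_models, global_model = two_tier_aggregate(teams)
print(team_models, global_model)  # [[2.0, 3.0], [5.0, 6.0]] [3.5, 4.5]
```

A real PerMFL implementation would add per-device and per-team personalization terms; this sketch only shows the multi-tier aggregation topology.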

Place, publisher, year, edition, pages
Springer Nature, 2025
Series
Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937 ; 2283
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-228859 (URN), 10.1007/978-981-96-6951-6_14 (DOI), 2-s2.0-105010821830 (Scopus ID), 978-981-96-6950-9 (ISBN), 978-981-96-6951-6 (ISBN)
Conference
ICONIP 2024, 31st International Conference on Neural Information Processing, Auckland, New Zealand, December 2-6, 2024
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2024-08-27. Created: 2024-08-27. Last updated: 2025-07-28. Bibliographically approved.
Banerjee, S., Roy, D., Subbaraju, V. & Bhuyan, M. H. (2025). Predicting event memorability using personalized federated learning. In: 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). Paper presented at IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Tucson, Arizona, USA, February 28 - March 4, 2025 (pp. 1556-1565). IEEE
Predicting event memorability using personalized federated learning
2025 (English). In: 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), IEEE, 2025, p. 1556-1565. Conference paper, Published paper (Other academic)
Abstract [en]

Lifelog images are very useful as memory cues for recalling past events. Estimating the level of event memory recall induced by a given lifelog image (event memorability) is useful for selecting images for cognitive interventions. Previous works for predicting event memorability follow a centralized model training paradigm that requires several users to share their lifelog images. This risks violating the privacy of individual lifeloggers. Alternatively, a personal model trained with a lifelogger’s own data guarantees privacy. However, it imposes significant effort on the lifelogger to provide a large enough sample of self-rated images to develop a well-performing model for event memorability. Therefore, we propose a clustered personalized federated learning setup, FedMEM, that avoids sharing raw images but still enables collaborative learning via model sharing. For an enhanced learning performance in the presence of data heterogeneity, FedMEM evaluates similarity among users to group them into clusters. We demonstrate that our approach furnishes high-performing personalized models compared to the state-of-the-art.
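The similarity-based grouping step that the FedMEM abstract describes can be sketched roughly as follows; greedy threshold clustering over model updates is our illustrative stand-in, not the paper's actual method:

```python
import math

def cosine(u, v):
    """Cosine similarity between two update vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def cluster_by_similarity(updates, threshold=0.9):
    """Greedy clustering: a client joins the first cluster whose
    representative update is similar enough, else starts a new cluster."""
    clusters = []  # each entry: (representative_update, [client indices])
    for i, u in enumerate(updates):
        for rep, members in clusters:
            if cosine(u, rep) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((u, [i]))
    return [members for _, members in clusters]

updates = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
print(cluster_by_similarity(updates))  # clients 0 and 1 group together; 2 is alone
```

Each cluster would then run its own federated averaging, giving clients a model trained on statistically similar peers.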

Place, publisher, year, edition, pages
IEEE, 2025
Series
Proceedings (IEEE Workshop on Applications of Computer Vision), ISSN 2472-6737, E-ISSN 2642-9381
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-228858 (URN), 10.1109/WACV61041.2025.00159 (DOI), 001481328900149 (), 2-s2.0-105003632963 (Scopus ID), 979-8-3315-1084-8 (ISBN), 979-8-3315-1083-1 (ISBN)
Conference
IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Tucson, Arizona, USA, February 28 - March 4, 2025
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2024-08-27. Created: 2024-08-27. Last updated: 2025-09-26. Bibliographically approved.
Banerjee, S. (2024). Advancing federated learning: algorithms and use-cases. (Doctoral dissertation). Umeå: Umeå University
Advancing federated learning: algorithms and use-cases
2024 (English). Doctoral thesis, comprehensive summary (Other academic)
Alternative title[sv]
Förbättrad federerad maskininlärning : algoritmer och tillämpningar
Abstract [en]

Federated Learning (FL) is a distributed machine learning paradigm that enables the training of models across numerous clients or organizations without requiring the transfer of local data. This approach addresses concerns about data privacy and ownership by keeping raw data on the client and sharing only model updates with a central server. Despite its benefits, federated learning faces unique challenges, such as data heterogeneity, computation and communication overheads, and the need for personalized models. These challenges result in reduced model performance, lower efficiency, and longer training times.

This thesis investigates these issues from theoretical, empirical, and practical perspectives, with four contributions: federated feature selection, adaptive client selection, model personalization, and socio-cognitive applications. First, we address data heterogeneity in federated feature selection for horizontal FL by developing algorithms based on mutual information and multi-objective optimization. Second, we tackle system heterogeneity, which involves variations in computation, storage, and communication capabilities among clients; we propose a solution that ranks clients with multi-objective optimization for efficient, fair, and adaptive participation in model training. Third, we address client drift caused by data heterogeneity in hierarchical federated learning with a personalized federated learning approach. Finally, we focus on two applications that benefit from the FL framework but suffer from data heterogeneity. The first predicts the level of autobiographical memory recall of events associated with lifelog images by developing clustered personalized FL algorithms, which help select effective lifelog image cues for cognitive interventions. The second is a personal image-privacy advisor for each client; along with data heterogeneity, the privacy advisor faces data scarcity. We develop a daisy-chain-enabled clustered personalized FL algorithm that predicts whether an image should be shared, kept private, or recommended for sharing by a third party.

Our findings reveal that the proposed methods significantly outperform current state-of-the-art FL algorithms, delivering superior performance, faster convergence, and greater training efficiency.

Place, publisher, year, edition, pages
Umeå: Umeå University, 2024. p. 84
Series
Report / UMINF, ISSN 0348-0542 ; 24.09
Keywords
Federated Learning, Federated Feature Selection, Statistical Heterogeneity, System Heterogeneity, Model Personalization, Socio-Cognitive Applications
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-228863 (URN), 978-91-8070-463-2 (ISBN), 978-91-8070-464-9 (ISBN)
Public defence
2024-09-23, Hörsal HUM.D.210, Humanisthuset, Umeå, 13:00 (English)
Opponent
Supervisors
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2024-09-02. Created: 2024-08-27. Last updated: 2024-08-28. Bibliographically approved.
Banerjee, S., Bhuyan, D., Elmroth, E. & Bhuyan, M. H. (2024). Cost-efficient feature selection for horizontal federated learning. IEEE Transactions on Artificial Intelligence, 5(12), 6551-6565
Cost-efficient feature selection for horizontal federated learning
2024 (English). In: IEEE Transactions on Artificial Intelligence, E-ISSN 2691-4581, Vol. 5, no 12, p. 6551-6565. Article in journal (Refereed). Published
Abstract [en]

Horizontal federated learning exhibits substantial similarity in feature space across distinct clients. However, not all features contribute significantly to the training of the global model, and the curse of dimensionality slows training. Reducing irrelevant and redundant features from the feature space therefore makes training faster and cheaper. This work aims to identify the common feature subset across clients in federated settings. We introduce a hybrid approach called Fed-MOFS, utilizing mutual information and clustering for local feature selection at each client. Unlike Fed-FiS, which uses a scoring function for global feature ranking, Fed-MOFS employs multi-objective optimization to prioritize features based on their higher relevance and lower redundancy. This paper compares the performance of Fed-MOFS with conventional and federated feature selection methods. Moreover, we tested the scalability, stability, and efficacy of both Fed-FiS and Fed-MOFS across diverse datasets. We also assessed how feature selection influences model convergence and explored its impact in scenarios with data heterogeneity. Our results show that Fed-MOFS enhances global model performance with a 50% reduction in feature space and is at least twice as fast as the FSHFL method. The computational complexity of both approaches is O(d²), which is lower than the state-of-the-art.
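The relevance/redundancy trade-off described above can be illustrated with a small sketch: keep only the Pareto-optimal features (higher class relevance, lower feature-feature redundancy). The MI scores below are made up, and this simple Pareto filter is our simplification of the paper's multi-objective step, not its actual code.

```python
def pareto_front(features):
    """features: dict name -> (relevance, redundancy).
    Keep a feature unless some other feature has >= relevance AND
    <= redundancy, with at least one strict inequality (dominance)."""
    def dominated(a, b):
        (ra, da), (rb, db) = features[a], features[b]
        return rb >= ra and db <= da and (rb > ra or db < da)
    return sorted(f for f in features
                  if not any(dominated(f, g) for g in features if g != f))

# Hypothetical (relevance, redundancy) scores for three features
scores = {"f1": (0.8, 0.1), "f2": (0.5, 0.4), "f3": (0.9, 0.3)}
print(pareto_front(scores))  # f2 is dominated by both f1 and f3
```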

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
Feature extraction, Computational modeling, Data models, Training, Federated learning, Artificial intelligence, Servers, Clustering, Horizontal Federated Learning, Feature Selection, Mutual Information, Multi-objective Optimization
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-228215 (URN), 10.1109/TAI.2024.3436664 (DOI), 2-s2.0-85200235298 (Scopus ID)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2024-08-05. Created: 2024-08-05. Last updated: 2025-01-13. Bibliographically approved.
Dadras, A., Banerjee, S., Prakhya, K. & Yurtsever, A. (2024). Federated Frank-Wolfe algorithm. In: Albert Bifet; Jesse Davis; Tomas Krilavičius; Meelis Kull; Eirini Ntoutsi; Indrė Žliobaitė (Ed.), Machine learning and knowledge discovery in databases. Research track: European Conference, ECML PKDD 2024, Vilnius, Lithuania, September 9–13, 2024, proceedings, part III. Paper presented at European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2024), Vilnius, Lithuania, September 9-13, 2024 (pp. 58-75). Springer Nature
Federated Frank-Wolfe algorithm
2024 (English). In: Machine learning and knowledge discovery in databases. Research track: European Conference, ECML PKDD 2024, Vilnius, Lithuania, September 9–13, 2024, proceedings, part III / [ed] Albert Bifet; Jesse Davis; Tomas Krilavičius; Meelis Kull; Eirini Ntoutsi; Indrė Žliobaitė, Springer Nature, 2024, p. 58-75. Conference paper, Published paper (Refereed)
Abstract [en]

Federated learning (FL) has gained a lot of attention in recent years for building privacy-preserving collaborative learning systems. However, FL algorithms for constrained machine learning problems are still limited, particularly when the projection step is costly. To this end, we propose a Federated Frank-Wolfe Algorithm (FedFW). FedFW features data privacy, low per-iteration cost, and communication of sparse signals. In the deterministic setting, FedFW achieves an ε-suboptimal solution within O(ε⁻²) iterations for smooth and convex objectives, and O(ε⁻³) iterations for smooth but non-convex objectives. Furthermore, we present a stochastic variant of FedFW and show that it finds a solution within O(ε⁻³) iterations in the convex setting. We demonstrate the empirical performance of FedFW on several machine learning tasks.
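For readers unfamiliar with the building block, a single Frank-Wolfe step replaces the costly projection with a cheap linear subproblem. A minimal sketch, assuming an L1-ball constraint (our choice for illustration, not necessarily the paper's):

```python
def lmo_l1_ball(grad, radius=1.0):
    """Linear minimization oracle over the L1 ball:
    argmin_{||s||_1 <= radius} <grad, s> puts all mass on the single
    coordinate with the largest |gradient|, with opposing sign."""
    i = max(range(len(grad)), key=lambda j: abs(grad[j]))
    s = [0.0] * len(grad)
    s[i] = -radius if grad[i] > 0 else radius
    return s

def frank_wolfe_step(x, grad, t):
    """One projection-free update: move toward the LMO solution."""
    gamma = 2.0 / (t + 2.0)  # classic step-size schedule
    s = lmo_l1_ball(grad)
    return [(1 - gamma) * xi + gamma * si for xi, si in zip(x, s)]

x = frank_wolfe_step([0.0, 0.0], [0.5, -2.0], t=0)
print(x)  # moves fully toward s = [0.0, 1.0] since gamma = 1 at t = 0
```

In the federated variant, each client runs such steps locally and the server averages the resulting sparse directions; the iterate stays feasible by construction, so no projection is ever needed.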

Place, publisher, year, edition, pages
Springer Nature, 2024
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 14943
Keywords
federated learning, frank wolfe, conditional gradient method, projection-free, distributed optimization
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-228614 (URN), 10.1007/978-3-031-70352-2_4 (DOI), 001308375900004 (), 978-3-031-70351-5 (ISBN), 978-3-031-70352-2 (ISBN)
Conference
European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2024), Vilnius, Lithuania, September 9-13, 2024
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); Swedish Research Council, 2023-05476
Note

Also part of the book sub series: Lecture Notes in Artificial Intelligence (LNAI). 

Available from: 2024-08-19. Created: 2024-08-19. Last updated: 2025-04-24. Bibliographically approved.
Banerjee, S., Patel, Y. S., Kumar, P. & Bhuyan, M. H. (2023). Towards post-disaster damage assessment using deep transfer learning and GAN-based data augmentation. In: ICDCN '23: Proceedings of the 24th International Conference on Distributed Computing and Networking. Paper presented at 24th International Conference on Distributed Computing and Networking, ICDCN 2023, Kharagpur, India, January 4-7, 2023 (pp. 372-377). ACM Digital Library
Towards post-disaster damage assessment using deep transfer learning and GAN-based data augmentation
2023 (English). In: ICDCN '23: Proceedings of the 24th International Conference on Distributed Computing and Networking, ACM Digital Library, 2023, p. 372-377. Conference paper, Published paper (Refereed)
Abstract [en]

Cyber-physical disaster systems (CPDS) are a new class of cyber-physical application that collects physical-realm measurements from IoT devices and sends them to the edge for damage-severity analysis of impacted sites in the aftermath of a large-scale disaster. However, the lack of effective machine learning paradigms and the data and device heterogeneity of edge devices pose significant challenges in disaster damage assessment (DDA). To address these issues, we propose a generative adversarial network (GAN) and a lightweight, deep-transfer-learning-enabled, fine-tuned machine learning pipeline to reduce overall sensing error and improve the model's performance. In this paper, we apply several GANs (DCGAN, DiscoGAN, ProGAN, and CycleGAN) to generate synthetic images of the disaster. Three pre-trained models with deep transfer learning, VGG19, ResNet18, and DenseNet121, are then applied to classify the disaster images. We observe that ResNet18 is the most pertinent model, achieving a test accuracy of 88.81%. In experiments on real-world DDA applications, we visualize the damage severity of disaster-impacted sites using different Class Activation Mapping (CAM) techniques, namely Grad-CAM++, Guided Grad-CAM, and Score-CAM. Finally, using k-means clustering on the generated heat maps, we obtain scatter plots that measure damage severity in no-damage, mild-damage, and severe-damage categories.

Place, publisher, year, edition, pages
ACM Digital Library, 2023
Series
ACM International Conference Proceeding Series
Keywords
Class Activation Mapping, Clustering, Cyber-Physical Systems, Damage Assessment, Deep Learning, Generative Adversarial Networks
National Category
Computer Systems
Identifiers
urn:nbn:se:umu:diva-203553 (URN), 10.1145/3571306.3571438 (DOI), 001098722500054 (), 2-s2.0-85145875463 (Scopus ID), 9781450397964 (ISBN)
Conference
24th International Conference on Distributed Computing and Networking, ICDCN 2023, Kharagpur, India, January 4-7, 2023
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); Knut and Alice Wallenberg Foundation
Available from: 2023-01-19. Created: 2023-01-19. Last updated: 2025-04-24. Bibliographically approved.
Banerjee, S., Ghosh, S. & Mishra, B. K. (2022). Application of deep learning for energy management in smart grid. In: Debi Prasanna Acharjya; Anirban Mitra; Noor Zaman (Ed.), Deep learning in data analytics: recent techniques, practices and applications (pp. 221-239). Springer
Application of deep learning for energy management in smart grid
2022 (English). In: Deep learning in data analytics: recent techniques, practices and applications / [ed] Debi Prasanna Acharjya; Anirban Mitra; Noor Zaman, Springer, 2022, p. 221-239. Chapter in book (Refereed)
Abstract [en]

In the modern electric power system, energy management and load forecasting are important tasks. Energy management systems are designed to monitor and optimize the energy requirements of smart systems. This work is divided into two parts. The first part covers load forecasting and energy management in a smart grid; load forecasting in the smart grid can be divided into long-term, mid-term, and short-term forecasting. The second part describes energy-usage optimization for electric vehicles, covering grid-to-vehicle energy demand management and optimization. The chapter first introduces different deep learning techniques and then discusses their applications to smart grids and smart vehicles.

Place, publisher, year, edition, pages
Springer, 2022. p. 19
Series
Studies in big data, ISSN 2197-6503, E-ISSN 2197-6511 ; 91
National Category
Computer Sciences; Energy Systems
Identifiers
urn:nbn:se:umu:diva-197791 (URN), 10.1007/978-3-030-75855-4_13 (DOI), 2-s2.0-85132891883 (Scopus ID), 978-3-030-75854-7 (ISBN), 978-3-030-75855-4 (ISBN)
Available from: 2022-07-05. Created: 2022-07-05. Last updated: 2023-03-24. Bibliographically approved.
Banerjee, S., Vu, X.-S. & Bhuyan, M. H. (2022). Optimized and adaptive federated learning for straggler-resilient device selection. In: 2022 International Joint Conference on Neural Networks (IJCNN). Paper presented at 2022 International Joint Conference on Neural Networks (IJCNN), Padua, Italy, July 18-23, 2022 (pp. 1-9). IEEE
Optimized and adaptive federated learning for straggler-resilient device selection
2022 (English). In: 2022 International Joint Conference on Neural Networks (IJCNN), IEEE, 2022, p. 1-9. Conference paper, Published paper (Refereed)
Abstract [en]

Federated Learning (FL) has evolved as a promising distributed learning paradigm in which data samples are disseminated over massively connected devices in an IID (independent and identically distributed) or non-IID manner. FL follows a collaborative training approach where each device uses local training data to train a local model, and the server generates a global model by combining the local models' parameters. However, FL is vulnerable to system heterogeneity when local devices have varying computational, storage, and communication capabilities over time. The presence of stragglers or low-performing devices in the learning process severely impacts the scalability of FL algorithms and significantly delays convergence. To mitigate this problem, we propose Fed-MOODS, a multi-objective-optimization-based device selection approach that reduces the effect of stragglers in the FL process. The primary criteria for optimization are to maximize (i) the available processing capacity of each device, (ii) the available memory of each device, and (iii) the bandwidth capacity of the participating devices. The multi-objective optimization prioritizes devices from fast to slow: faster devices participate in early global rounds, and slower devices from later Pareto fronts are gradually incorporated to improve the model's accuracy. The overall training time of Fed-MOODS is 1.8× and 1.48× faster than the baseline model (FedAvg) with random device selection for MNIST and FMNIST non-IID data, respectively. Fed-MOODS is extensively evaluated under multiple experimental settings, and the results show that it significantly improves the model's convergence and performance while maintaining fairness in the prioritized participation of devices for both IID and non-IID settings.
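The fast-to-slow Pareto-front ranking described above can be sketched with non-dominated sorting over three maximized objectives. The device numbers are hypothetical, and this is our illustration of the idea, not the paper's implementation.

```python
def dominates(a, b):
    """a dominates b if a is >= in every objective and > in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_fronts(devices):
    """Rank devices into successive Pareto fronts, fastest first."""
    remaining = dict(enumerate(devices))
    fronts = []
    while remaining:
        # a device is in the current front if nothing left dominates it
        front = [i for i in remaining
                 if not any(dominates(remaining[j], remaining[i])
                            for j in remaining if j != i)]
        fronts.append(sorted(front))
        for i in front:
            del remaining[i]
    return fronts

# (cpu cores, memory GB, bandwidth Mbps) for four hypothetical devices
devices = [(4, 8, 100), (2, 4, 50), (4, 16, 100), (1, 2, 10)]
print(pareto_fronts(devices))  # device 2 dominates all; device 3 is slowest
```

Early global rounds would then draw participants from the first fronts and gradually admit devices from later fronts, matching the fast-to-slow schedule in the abstract.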

Place, publisher, year, edition, pages
IEEE, 2022
Series
Proceedings of International Joint Conference on Neural Networks, ISSN 2161-4393, E-ISSN 2161-4407
National Category
Robotics and automation
Identifiers
urn:nbn:se:umu:diva-199974 (URN), 10.1109/IJCNN55064.2022.9892777 (DOI), 000867070907014 (), 2-s2.0-85140800465 (Scopus ID)
Conference
2022 International Joint Conference on Neural Networks (IJCNN), Padua, Italy, July 18-23, 2022
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2022-10-03. Created: 2022-10-03. Last updated: 2025-02-09. Bibliographically approved.
Banerjee, S., Yurtsever, A. & Bhuyan, M. H. (2022). Personalized multi-tier federated learning. Paper presented at FL-NeurIPS'22, International Workshop on Federated Learning: Recent Advances and New Challenges in Conjunction with NeurIPS 2022, New Orleans, LA, USA, December 2, 2022.
Personalized multi-tier federated learning
2022 (English). Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

The challenge of personalized federated learning (pFL) is to capture the heterogeneity of data with inexpensive communication and to achieve customized performance for devices. To address that challenge, we introduced personalized multi-tier federated learning using Moreau envelopes (pFedMT) for settings with known cluster structures within devices. Moreau envelopes are used as the devices' and teams' regularized loss functions. Empirically, we verify that the personalized model performs better than vanilla FedAvg, Per-FedAvg, and pFedMe. pFedMT achieves 98.30% and 99.71% accuracy on the MNIST dataset under convex and non-convex settings, respectively.

Keywords
Multi-tier Federated Learning, Personalization, Statistical Heterogeneity, Distributed Optimization, Inexpensive Communication, Moreau Envelope
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-200943 (URN)
Conference
FL-NeurIPS'22, International Workshop on Federated Learning: Recent Advances and New Challenges in Conjunction with NeurIPS 2022, New Orleans, LA, USA, December 2, 2022
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2022-11-10. Created: 2022-11-10. Last updated: 2024-08-26. Bibliographically approved.
Banerjee, S., Elmroth, E. & Bhuyan, M. H. (2021). Fed-FiS: A Novel Information-Theoretic Federated Feature Selection for Learning Stability. In: Teddy Mantoro, Minho Lee, Media Anugerah Ayu, Kok Wai Wong, Achmad Nizar Hidayanto (Ed.), Neural Information Processing: 28th International Conference, ICONIP 2021, Sanur, Bali, Indonesia, December 8–12, 2021, Proceedings, Part V. Paper presented at ICONIP: International Conference on Neural Information Processing, Virtual, December 8-12, 2021 (pp. 480-487). Springer Nature, 1516
Fed-FiS: A Novel Information-Theoretic Federated Feature Selection for Learning Stability
2021 (English). In: Neural Information Processing: 28th International Conference, ICONIP 2021, Sanur, Bali, Indonesia, December 8–12, 2021, Proceedings, Part V / [ed] Teddy Mantoro, Minho Lee, Media Anugerah Ayu, Kok Wai Wong, Achmad Nizar Hidayanto, Springer Nature, 2021, Vol. 1516, p. 480-487. Conference paper, Published paper (Refereed)
Abstract [en]

In the era of big data and federated learning, traditional feature selection methods show unacceptable performance when handling heterogeneity in federated environments. We propose Fed-FiS, an information-theoretic federated feature selection approach, to overcome problems that arise from heterogeneity. Fed-FiS estimates feature-feature mutual information (FFMI) and feature-class mutual information (FCMI) to generate a local feature subset on each user device. Based on the federated values across features and classes obtained from each device, the central server ranks each feature and generates a global dominant feature subset. We show that our approach can find a stable feature subset collaboratively from all local devices. Extensive experiments on multiple benchmark IID (independent and identically distributed) and non-IID datasets demonstrate that Fed-FiS significantly improves overall performance in comparison to state-of-the-art methods. To the best of our knowledge, this is the first work on feature selection in a federated learning system.
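The FCMI quantity mentioned above is, at its core, mutual information between a feature column and the class labels. As a hedged illustration (a textbook discrete estimator, not code from Fed-FiS):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Discrete MI in bits between two equal-length sequences,
    estimated from empirical joint and marginal frequencies."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

feature = [0, 0, 1, 1]
labels  = [0, 0, 1, 1]  # feature perfectly predicts the class
print(round(mutual_information(feature, labels), 3))  # 1.0 bit
```

In a federated setting, each device would compute such scores locally over its own data and send only the scores (not the data) to the server for global ranking.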

Place, publisher, year, edition, pages
Springer Nature, 2021
Series
Communications in Computer and Information Science (CCIS), ISSN 1865-0929, E-ISSN 1865-0937 ; 1516
Keywords
Federated learning, Feature selection, Mutual information, Classification, Statistical heterogeneity
National Category
Computer Sciences Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-190137 (URN), 10.1007/978-3-030-92307-5_56 (DOI), 001420205900054 (), 2-s2.0-85121934752 (Scopus ID), 978-3-030-92306-8 (ISBN), 978-3-030-92307-5 (ISBN)
Conference
ICONIP: International Conference on Neural Information Processing, Virtual, December 8-12, 2021
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP), 570011342
Available from: 2021-12-06. Created: 2021-12-06. Last updated: 2025-04-24. Bibliographically approved.