umu.se Publications

Publications (10 of 34)
Guarrasi, V., Aksu, F., Caruso, C. M., Di Feola, F., Rofena, A., Ruffini, F. & Soda, P. (2025). A systematic review of intermediate fusion in multimodal deep learning for biomedical applications. Image and Vision Computing, 158, Article ID 105509.
A systematic review of intermediate fusion in multimodal deep learning for biomedical applications
2025 (English). In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 158, article id 105509. Article in journal (Refereed). Published.
Abstract [en]

Deep learning has revolutionized biomedical research by providing sophisticated methods to handle complex, high-dimensional data. Multimodal deep learning (MDL) further enhances this capability by integrating diverse data types such as imaging, textual data, and genetic information, leading to more robust and accurate predictive models. In MDL, unlike early and late fusion methods, intermediate fusion stands out for its ability to effectively combine modality-specific features during the learning process. This systematic review comprehensively analyzes and formalizes current intermediate fusion methods in biomedical applications, highlighting their effectiveness in improving predictive performance and capturing complex inter-modal relationships. We investigate the techniques employed, the challenges faced, and potential future directions for advancing intermediate fusion methods. Additionally, we introduce a novel structured notation that standardizes intermediate fusion architectures, enhancing understanding and facilitating implementation across various domains. Our findings provide actionable insights and practical guidelines intended to support researchers, healthcare professionals, and the broader deep learning community in developing more sophisticated and insightful multimodal models. Through this review, we aim to provide a foundational framework for future research and practical applications in the dynamic field of MDL.

Place, publisher, year, edition, pages
Elsevier, 2025
Keywords
Biomedical data, Data fusion, Data integration, Fusion techniques, Healthcare, Joint fusion
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-237396 (URN); 10.1016/j.imavis.2025.105509 (DOI); 2-s2.0-105001226580 (Scopus ID)
Funder
The Kempe Foundations, JCSMK24-0094
Available from: 2025-04-10. Created: 2025-04-10. Last updated: 2025-04-10. Bibliographically approved.
Mogensen, K., Guarrasi, V., Larsson, J., Hansson, W., Wåhlin, A., Koskinen, L.-O. D., . . . Qvarlander, S. (2025). An optimized ensemble search approach for classification of higher-level gait disorder using brain magnetic resonance images. Computers in Biology and Medicine, 184, Article ID 109457.
An optimized ensemble search approach for classification of higher-level gait disorder using brain magnetic resonance images
2025 (English). In: Computers in Biology and Medicine, ISSN 0010-4825, E-ISSN 1879-0534, Vol. 184, article id 109457. Article in journal (Refereed). Published.
Abstract [en]

Higher-Level Gait Disorder (HLGD) is a type of gait disorder estimated to affect up to 6% of the older population. By definition, its symptoms originate from the higher-level nervous system, yet its association with brain morphology remains unclear. This study hypothesizes that there are patterns in brain morphology linked to HLGD. For the first time in the literature, this work investigates whether deep learning, in the form of convolutional neural networks, can capture patterns in magnetic resonance images to identify individuals affected by HLGD. To handle this new classification task, we propose setting up an ensemble of models. This leverages the benefits of combining classifiers instead of determining which single network is the most suitable, developing a new architecture, or customizing an existing one. We introduce a computationally cost-effective search algorithm that finds the optimal ensemble using a cost function combining traditional performance scores with the diversity among the models. Using a unique dataset from a large population-based cohort (VESPR), the ensemble identified by our algorithm demonstrated superior performance compared to single networks, other ensemble fusion techniques, and the best linear radiological measure, which emphasizes the importance of incorporating diversity into the cost function. Furthermore, the results indicate significant morphological differences in brain structure between HLGD-affected individuals and controls, motivating further research into which brain areas the networks base their classifications on, to better understand the pathophysiology of HLGD.
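The abstract does not spell out the search procedure itself; as a minimal greedy sketch of the idea, one can score candidate ensembles with a cost that combines member accuracy and pairwise disagreement as the diversity term. The `alpha` weighting, the greedy strategy, and all names here are illustrative assumptions, not the authors' algorithm:

```python
import itertools

def accuracy(preds, labels):
    # Fraction of correctly classified samples.
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def disagreement(preds_a, preds_b):
    # Fraction of samples on which two classifiers disagree (a simple diversity measure).
    return sum(a != b for a, b in zip(preds_a, preds_b)) / len(preds_a)

def ensemble_cost(member_preds, labels, alpha=0.5):
    # Balance mean member accuracy against mean pairwise diversity.
    accs = [accuracy(p, labels) for p in member_preds]
    pairs = list(itertools.combinations(member_preds, 2))
    div = sum(disagreement(a, b) for a, b in pairs) / len(pairs) if pairs else 0.0
    return alpha * (sum(accs) / len(accs)) + (1 - alpha) * div

def greedy_ensemble_search(all_preds, labels, k=3):
    # Start from the single most accurate model, then greedily add whichever
    # model most increases the ensemble cost, up to k members.
    chosen = [max(range(len(all_preds)), key=lambda i: accuracy(all_preds[i], labels))]
    while len(chosen) < min(k, len(all_preds)):
        best, best_cost = None, -1.0
        for i in range(len(all_preds)):
            if i in chosen:
                continue
            cost = ensemble_cost([all_preds[j] for j in chosen + [i]], labels)
            if cost > best_cost:
                best, best_cost = i, cost
        chosen.append(best)
    return chosen
```

A greedy search like this evaluates O(n·k) candidate ensembles instead of all 2^n subsets, which is the kind of computational saving the abstract alludes to.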

Place, publisher, year, edition, pages
Elsevier, 2025
Keywords
Artificial intelligence, CNN, Convolutional neural networks, Ensemble learning, Gait disorder, Medical imaging, MRI, Neurological disorders, Normal pressure hydrocephalus, Optimization
National Category
Neurosciences
Identifiers
urn:nbn:se:umu:diva-232782 (URN); 10.1016/j.compbiomed.2024.109457 (DOI); 2-s2.0-85210376400 (Scopus ID)
Funder
Swedish Foundation for Strategic Research, RMX18-0152; Swedish Research Council, 2021-00711_VR/JPND; Umeå University; Region Västerbotten
Available from: 2024-12-13. Created: 2024-12-13. Last updated: 2024-12-13. Bibliographically approved.
Mantegna, M., Tronchin, L., Tortora, M. & Soda, P. (2025). Benchmarking GAN-based vs classical data augmentation on biomedical images. In: Shivakumara Palaiahnakote; Stephanie Schuckers; Jean-Marc Ogier; Prabir Bhattacharya; Umapada Pal; Saumik Bhattacharya (Ed.), Pattern Recognition. ICPR 2024 International Workshops and Challenges: Kolkata, India, December 1, 2024, Proceedings, Part II. Paper presented at 27th International Conference on Pattern Recognition, ICPR 2024, Kolkata, India, December 1, 2024 (pp. 92-104). Springer Science and Business Media Deutschland GmbH
Benchmarking GAN-based vs classical data augmentation on biomedical images
2025 (English). In: Pattern Recognition. ICPR 2024 International Workshops and Challenges: Kolkata, India, December 1, 2024, Proceedings, Part II / [ed] Shivakumara Palaiahnakote; Stephanie Schuckers; Jean-Marc Ogier; Prabir Bhattacharya; Umapada Pal; Saumik Bhattacharya, Springer Science and Business Media Deutschland GmbH, 2025, p. 92-104. Conference paper, Published paper (Refereed).
Abstract [en]

The medical field faces significant data shortages due to high image acquisition and maintenance costs. Data Augmentation (DA) aims to mitigate this by increasing data availability and improving model generalization. However, traditional DA methods often produce data with limited quality and diversity. Generative Adversarial Networks (GANs) present a promising alternative, offering potential solutions to data scarcity issues. This paper evaluates the impact of GAN-based data augmentation in medical imaging and provides a benchmark for the efficacy of synthetic data in downstream classification tasks. To this end, we performed a wide set of tests using three different GAN architectures on six 2D datasets from the standardized MedMNIST biomedical image collection, conducting a total of 696 experiments. Our results reveal that while GAN-based DA methods show promise with low-dimensional datasets, traditional DA methods still outperform them.

Place, publisher, year, edition, pages
Springer Science and Business Media Deutschland GmbH, 2025
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 15615
Keywords
Data Augmentation, Deep Learning, GANs, Medical imaging, MedMNIST
National Category
Computer Sciences; Computer graphics and computer vision
Identifiers
urn:nbn:se:umu:diva-239124 (URN); 10.1007/978-3-031-87660-8_7 (DOI); 2-s2.0-105004792609 (Scopus ID); 978-3-031-87659-2 (ISBN); 978-3-031-87660-8 (ISBN)
Conference
27th International Conference on Pattern Recognition, ICPR 2024, Kolkata, India, December 1, 2024
Available from: 2025-05-27. Created: 2025-05-27. Last updated: 2025-05-27. Bibliographically approved.
Francesconi, A., di Biase, L., Cappetta, D., Rebecchi, F., Soda, P., Sicilia, R. & Guarrasi, V. (2025). Class balancing diversity multimodal ensemble for Alzheimer's disease diagnosis and early detection. Computerized Medical Imaging and Graphics, 123, Article ID 102529.
Class balancing diversity multimodal ensemble for Alzheimer's disease diagnosis and early detection
2025 (English). In: Computerized Medical Imaging and Graphics, ISSN 0895-6111, E-ISSN 1879-0771, Vol. 123, article id 102529. Article in journal (Refereed). Published.
Abstract [en]

Alzheimer's disease (AD) poses significant global health challenges due to its increasing prevalence and associated societal costs. Early detection and diagnosis of AD are critical for delaying progression and improving patient outcomes. Traditional diagnostic methods and single-modality data often fall short in identifying early-stage AD and distinguishing it from Mild Cognitive Impairment (MCI). This study addresses these challenges by introducing a novel approach: multImodal enseMble via class BALancing diversity for iMbalancEd Data (IMBALMED). IMBALMED integrates multimodal data from the Alzheimer's Disease Neuroimaging Initiative database, including clinical assessments, neuroimaging phenotypes, biospecimen, and subject characteristics data. It employs a new ensemble of model classifiers, designed specifically for this framework, which combines eight distinct families of learning paradigms trained with diverse class balancing techniques to overcome class imbalance and enhance model accuracy. We evaluate IMBALMED on two diagnostic tasks (binary and ternary classification) and four binary early detection tasks (at 12, 24, 36, and 48 months), comparing its performance with state-of-the-art algorithms and an unbalanced dataset method. To further validate the proposed model and ensure genuine generalization to real-world scenarios, we conducted an external validation experiment using data from the most recent phase of the ADNI dataset. IMBALMED demonstrates superior diagnostic accuracy and predictive performance in both binary and ternary classification tasks, significantly improving early detection of MCI at a 48-month time point and showing excellent generalizability in the 12-month task during external validation. The method shows improved classification performance and robustness, offering a promising solution for early detection and management of AD.
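As a rough illustration of the class-balancing-diversity idea (not IMBALMED itself: the classifier, the balancing strategies, and the voting rule here are all simplified stand-ins), one could train copies of a model under different resampling schemes and majority-vote their predictions:

```python
import numpy as np

rng = np.random.default_rng(0)

def oversample(X, y):
    # Randomly duplicate samples of each class up to the majority count.
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = np.concatenate([
        rng.choice(np.where(y == c)[0], size=n_max, replace=True) for c in classes
    ])
    return X[idx], y[idx]

def undersample(X, y):
    # Randomly drop samples of each class down to the minority count.
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    idx = np.concatenate([
        rng.choice(np.where(y == c)[0], size=n_min, replace=False) for c in classes
    ])
    return X[idx], y[idx]

class NearestCentroid:
    # A deliberately simple stand-in for one "family of learning paradigms".
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

def balanced_ensemble_predict(X_train, y_train, X_test):
    # Train one copy per balancing strategy and majority-vote the predictions.
    votes = []
    for balance in (lambda X, y: (X, y), oversample, undersample):
        Xb, yb = balance(X_train, y_train)
        votes.append(NearestCentroid().fit(Xb, yb).predict(X_test))
    votes = np.stack(votes)
    return np.array([np.bincount(col).argmax() for col in votes.T])
```

The point of the pattern is that each ensemble member sees a differently rebalanced view of the data, so the vote is less biased toward the majority class than any single model trained on the raw imbalance.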

Place, publisher, year, edition, pages
Elsevier, 2025
Keywords
Ensemble learning, Imbalance learning, Machine learning, Mild cognitive impairment, Multimodal data, Tabular data
National Category
Radiology and Medical Imaging
Identifiers
urn:nbn:se:umu:diva-237366 (URN); 10.1016/j.compmedimag.2025.102529 (DOI); 001457391300001 (); 40147216 (PubMedID); 2-s2.0-105001418177 (Scopus ID)
Available from: 2025-04-23. Created: 2025-04-23. Last updated: 2025-04-23. Bibliographically approved.
Aksu, F., Cordelli, E., Gelardi, F., Chiti, A. & Soda, P. (2025). Enhancing NSCLC histological subtype classification: a federated learning approach using triplet loss. In: Shivakumara Palaiahnakote; Stephanie Schuckers; Jean-Marc Ogier; Prabir Bhattacharya; Umapada Pal; Saumik Bhattacharya (Ed.), Pattern Recognition. ICPR 2024 International Workshops and Challenges: Kolkata, India, December 1, 2024, Proceedings, Part II. Paper presented at 27th International Conference on Pattern Recognition, ICPR 2024, Kolkata, India, December 1, 2024 (pp. 154-168). Springer Nature
Enhancing NSCLC histological subtype classification: a federated learning approach using triplet loss
2025 (English). In: Pattern Recognition. ICPR 2024 International Workshops and Challenges: Kolkata, India, December 1, 2024, Proceedings, Part II / [ed] Shivakumara Palaiahnakote; Stephanie Schuckers; Jean-Marc Ogier; Prabir Bhattacharya; Umapada Pal; Saumik Bhattacharya, Springer Nature, 2025, p. 154-168. Conference paper, Published paper (Refereed).
Abstract [en]

Lung cancer remains one of the leading causes of cancer-related deaths worldwide, with Non-Small Cell Lung Cancer (NSCLC) accounting for approximately 85% of all cases. Accurate histological subtype classification of NSCLC is crucial for personalized treatment planning and improving patient outcomes. Developing robust classification models for NSCLC subtypes often requires large, diverse datasets, which can be challenging to obtain due to privacy concerns and data silos. This study proposes an approach combining federated learning with triplet loss to address these challenges. We evaluated our method’s performance in classifying NSCLC subtypes using data from multiple institutions while preserving privacy. Our experiments compared the proposed federated learning approach with triplet loss against alternative methods, including local training and softmax loss. Results demonstrated that our federated learning approach with triplet loss consistently outperformed other methods across key metrics. The combination of federated learning and triplet loss showed synergistic effects, leveraging external datasets to improve model performance while maintaining data confidentiality. The source code for the implementation described in this paper is available at https://github.com/aksufatih/federated-triplet-histology.
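The two ingredients named in the abstract can be sketched in a few lines. This is a toy illustration of the standard triplet loss and of FedAvg-style weighted aggregation, not the paper's implementation (see the linked repository for that); the function names are hypothetical:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # max(0, d(a,p) - d(a,n) + margin): embeddings of the same histological
    # subtype are pulled together, different subtypes pushed at least
    # `margin` apart in embedding space.
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def fedavg(client_weights, client_sizes):
    # FedAvg aggregation: average client model parameters weighted by local
    # dataset size, so raw patient data never leaves an institution.
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))
```

In a federated round, each institution would take local gradient steps on its own triplets and send only the updated parameters to the server, which applies `fedavg` and broadcasts the result back.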

Place, publisher, year, edition, pages
Springer Nature, 2025
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 15615
Keywords
Federated learning, Histology classification, Medical image classification, Triplet networks, Virtual biopsy
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-239122 (URN); 10.1007/978-3-031-87660-8_12 (DOI); 2-s2.0-105004795385 (Scopus ID); 978-3-031-87659-2 (ISBN); 978-3-031-87660-8 (ISBN)
Conference
27th International Conference on Pattern Recognition, ICPR 2024, Kolkata, India, December 1, 2024
Available from: 2025-05-27. Created: 2025-05-27. Last updated: 2025-05-27. Bibliographically approved.
Paolo, D., Greco, C., Cortellini, A., Ramella, S., Soda, P., Bria, A. & Sicilia, R. (2025). Hierarchical embedding attention for overall survival prediction in lung cancer from unstructured EHRs. BMC Medical Informatics and Decision Making, 25(1), Article ID 169.
Hierarchical embedding attention for overall survival prediction in lung cancer from unstructured EHRs
2025 (English). In: BMC Medical Informatics and Decision Making, E-ISSN 1472-6947, Vol. 25, no 1, article id 169. Article in journal (Refereed). Published.
Abstract [en]

The automated processing of Electronic Health Records (EHRs) poses a significant challenge due to their unstructured nature, rich in valuable yet disorganized information. Natural Language Processing (NLP), particularly Named Entity Recognition (NER), has been instrumental in extracting structured information from EHR data. However, existing literature primarily focuses on extracting handcrafted clinical features through NLP and NER methods without delving into their learned representations. In this work, we explore the untapped potential of these representations by considering their contextual richness and entity-specific information. Our proposed methodology extracts the representations generated by a transformer-based NER model on EHR data, combines them using a hierarchical attention mechanism, and employs the resulting enriched representation as input for a clinical prediction model. Specifically, this study addresses Overall Survival (OS) in Non-Small Cell Lung Cancer (NSCLC) using unstructured EHR data collected from an Italian clinical centre, encompassing 838 records from 231 lung cancer patients. While our study is applied to EHRs written in Italian, it serves as a use case to prove the effectiveness of extracting and employing high-level textual representations that capture relevant information as named entities. Our methodology is interpretable because the hierarchical attention mechanism highlights the information in EHRs that the model considers the most crucial during the decision-making process. We validated this interpretability by measuring, through a questionnaire, domain experts' agreement on the importance the hierarchical attention mechanism assigned to EHR information. Results demonstrate the effectiveness of our method, showcasing statistically significant improvements over traditional manually extracted clinical features.
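A minimal numpy sketch of two-level attention pooling over entity embeddings can make the "hierarchical" part concrete. The queries, dimensions, and pooling rule are illustrative assumptions; the paper's transformer-based NER representations and prediction head are omitted:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(vectors, query):
    # Score each vector against a (learned) query, softmax the scores, and
    # return the weighted sum plus the weights (kept for interpretability).
    scores = vectors @ query
    weights = softmax(scores)
    return weights @ vectors, weights

def hierarchical_representation(records, q_entity, q_record):
    # Level 1: pool entity embeddings within each EHR record.
    # Level 2: pool the per-record vectors into one patient representation.
    record_vecs, record_attn = [], []
    for entities in records:
        vec, w = attention_pool(entities, q_entity)
        record_vecs.append(vec)
        record_attn.append(w)
    patient_vec, w_records = attention_pool(np.stack(record_vecs), q_record)
    return patient_vec, record_attn, w_records
```

Because the attention weights at both levels are returned, one can inspect which entities and which records dominated the patient representation, which is the interpretability mechanism the abstract describes validating with domain experts.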

Place, publisher, year, edition, pages
Springer Nature, 2025
Keywords
Attention mechanism, Lung cancer, NER, Survival analysis, Transformer, Unstructured EHRs
National Category
Other Health Sciences
Identifiers
urn:nbn:se:umu:diva-238588 (URN); 10.1186/s12911-025-02998-6 (DOI); 001469748100003 (); 40251623 (PubMedID); 2-s2.0-105003252306 (Scopus ID)
Available from: 2025-05-19. Created: 2025-05-19. Last updated: 2025-05-19. Bibliographically approved.
Aksu, F., Gelardi, F., Chiti, A. & Soda, P. (2025). Multi-stage intermediate fusion for multimodal learning to classify non-small cell lung cancer subtypes from CT and PET. Pattern Recognition Letters, 193, 86-93
Multi-stage intermediate fusion for multimodal learning to classify non-small cell lung cancer subtypes from CT and PET
2025 (English). In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 193, p. 86-93. Article in journal (Refereed). Published.
Abstract [en]

Accurate classification of histological subtypes of non-small cell lung cancer (NSCLC) is essential in the era of precision medicine, yet current invasive techniques are not always feasible and may lead to clinical complications. This study presents MINT, a Multi-stage INTermediate fusion approach to classify NSCLC subtypes from CT and PET images. Our method integrates the two modalities at different stages of feature extraction, using voxel-wise fusion to exploit complementary information across varying abstraction levels while preserving spatial correlations. We compare our method against unimodal approaches using only CT or PET images to demonstrate the benefits of modality fusion, and further benchmark it against early and late fusion techniques to highlight the advantages of intermediate fusion during feature extraction. Additionally, we compare our model with the only existing intermediate fusion method for histological subtype classification using PET/CT images. Our results demonstrate that the proposed method outperforms all alternatives across key metrics, with an accuracy and AUC equal to 0.724 and 0.681, respectively. This non-invasive approach has the potential to significantly improve diagnostic accuracy, facilitate more informed treatment decisions, and advance personalized care in lung cancer management.
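A flat numpy sketch can show the multi-stage fusion pattern the abstract describes. Real MINT operates on 3D CT/PET feature maps with voxel-wise fusion; the vector shapes, the linear-plus-ReLU "stages", and the element-wise sum used here are all illustrative simplifications, not the authors' architecture:

```python
import numpy as np

def conv_stage(x, w):
    # Stand-in for one convolutional feature-extraction stage:
    # a linear map followed by ReLU.
    return np.maximum(x @ w, 0.0)

def multistage_fusion(ct, pet, ct_ws, pet_ws, fuse_ws):
    # At each stage, extract modality-specific features from CT and PET,
    # fuse them element-wise (the flat analogue of voxel-wise fusion, which
    # preserves spatial correspondence), and carry the fused signal forward
    # alongside the two branches so every abstraction level contributes.
    fused = None
    for w_ct, w_pet, w_f in zip(ct_ws, pet_ws, fuse_ws):
        ct = conv_stage(ct, w_ct)
        pet = conv_stage(pet, w_pet)
        stage_fused = ct + pet  # element-wise fusion of aligned features
        fused = stage_fused if fused is None else conv_stage(fused, w_f) + stage_fused
    return fused
```

In contrast, early fusion would sum the raw inputs once before any stage, and late fusion would only combine the two branches' final outputs; fusing at every stage is what makes this "intermediate" and "multi-stage".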

Place, publisher, year, edition, pages
Elsevier, 2025
Keywords
Histological subtype classification, Intermediate fusion, Medical image analysis, Multi-stage fusion, Multimodal deep learning
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:umu:diva-238625 (URN); 10.1016/j.patrec.2025.04.001 (DOI); 2-s2.0-105003541793 (Scopus ID)
Funder
Swedish Research Council, 2022-06725; Swedish Research Council, 2018-05973
Available from: 2025-05-09. Created: 2025-05-09. Last updated: 2025-05-09. Bibliographically approved.
Paolo, D., Russo, C., Russo, G., Greco, C., Cortellini, A., Russano, M., . . . Sicilia, R. (2025). Pathologic complete response prediction with machine learning using hierarchical attention feature extraction. In: Shivakumara Palaiahnakote; Stephanie Schuckers; Jean-Marc Ogier; Prabir Bhattacharya; Umapada Pal; Saumik Bhattacharya (Ed.), Pattern Recognition. ICPR 2024 International Workshops and Challenges: Kolkata, India, December 1, 2024, Proceedings, Part II. Paper presented at 27th International Conference on Pattern Recognition, ICPR 2024, Kolkata, India, December 1, 2024 (pp. 255-267). Springer Nature
Pathologic complete response prediction with machine learning using hierarchical attention feature extraction
2025 (English). In: Pattern Recognition. ICPR 2024 International Workshops and Challenges: Kolkata, India, December 1, 2024, Proceedings, Part II / [ed] Shivakumara Palaiahnakote; Stephanie Schuckers; Jean-Marc Ogier; Prabir Bhattacharya; Umapada Pal; Saumik Bhattacharya, Springer Nature, 2025, p. 255-267. Conference paper, Published paper (Refereed).
Abstract [en]

Predicting pathologic complete response in non-small cell lung cancer is crucial for tailoring effective treatment strategies and improving patient outcomes. With the increasing application of artificial intelligence in cancer research, machine learning is poised to play a significant role in prognostication and decision-making. This paper presents a novel approach that applies named entity recognition and attention mechanisms to electronic health records to predict pathologic complete response. We first employ named entity recognition to extract relevant biomedical entities from the unstructured clinical notes within reports. These entities, combined with structured data, are then processed using a hierarchical attention mechanism to generate comprehensive patient representations. This approach captures complex relationships and contextual information within electronic health records that traditional methods miss. The results highlight the potential of advanced natural language processing techniques to enhance clinical decision-making and support personalized treatment planning in oncology.

Place, publisher, year, edition, pages
Springer Nature, 2025
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 15615
Keywords
NLP, NSCLC, pCR
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-239120 (URN); 10.1007/978-3-031-87660-8_19 (DOI); 2-s2.0-105004791003 (Scopus ID); 978-3-031-87659-2 (ISBN); 978-3-031-87660-8 (ISBN)
Conference
27th International Conference on Pattern Recognition, ICPR 2024, Kolkata, India, December 1, 2024
Available from: 2025-05-27. Created: 2025-05-27. Last updated: 2025-05-27. Bibliographically approved.
Caruso, C. M., Guarrasi, V., Ramella, S. & Soda, P. (2024). A deep learning approach for overall survival prediction in lung cancer with missing values. Computer Methods and Programs in Biomedicine, 254, Article ID 108308.
A deep learning approach for overall survival prediction in lung cancer with missing values
2024 (English). In: Computer Methods and Programs in Biomedicine, ISSN 0169-2607, E-ISSN 1872-7565, Vol. 254, article id 108308. Article in journal (Refereed). Published.
Abstract [en]

Background and Objective: In the field of lung cancer research, particularly in the analysis of overall survival (OS), artificial intelligence (AI) serves crucial roles. Given the prevalent issue of missing data in the medical domain, our primary objective is to develop an AI model capable of dynamically handling this missing data. Additionally, we aim to leverage all accessible data, effectively analyzing both uncensored patients who have experienced the event of interest and censored patients who have not, by embedding within our AI model a specialized technique not commonly utilized in other AI tasks. Through the realization of these objectives, our model aims to provide precise OS predictions for non-small cell lung cancer (NSCLC) patients, overcoming these significant challenges.

Methods: We present a novel approach to survival analysis with missing values in the context of NSCLC, which exploits the strengths of the transformer architecture to account only for available features without requiring any imputation strategy. More specifically, this model tailors the transformer architecture to tabular data by adapting its feature embedding and masked self-attention to mask missing data and fully exploit the available ones. By making use of ad-hoc designed losses for OS, it is able to account for both censored and uncensored patients, as well as changes in risks over time.
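The masking idea can be illustrated with a toy self-attention over tabular feature embeddings. This is an assumption-laden sketch, not the paper's model: the feature embedding, the survival-specific losses, and the treatment of queries originating from missing positions are all omitted here.

```python
import numpy as np

def masked_self_attention(features, present):
    # features: (n_features, d) embeddings of one patient's tabular variables.
    # present:  boolean mask, False where a variable's value is missing.
    # Missing positions get -inf attention scores, so after the softmax they
    # receive exactly zero weight; no imputation is ever needed.
    d = features.shape[1]
    scores = features @ features.T / np.sqrt(d)
    scores[:, ~present] = -np.inf
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights = weights / weights.sum(axis=1, keepdims=True)
    return weights @ features, weights
```

Because absent features contribute nothing to the attention output, the model's prediction depends only on the variables that were actually observed for that patient, which is what lets the approach skip the choice of an imputation strategy entirely.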

Results: We compared our method with state-of-the-art models for survival analysis coupled with different imputation strategies. We evaluated the results obtained over a period of 6 years using different time granularities obtaining a Ct-index, a time-dependent variant of the C-index, of 71.97, 77.58 and 80.72 for time units of 1 month, 1 year and 2 years, respectively, outperforming all state-of-the-art methods regardless of the imputation method used.

Conclusions: The results show that our model not only outperforms the state-of-the-art's performance but also simplifies the analysis in the presence of missing data, by effectively eliminating the need to identify the most appropriate imputation strategy for predicting OS in NSCLC patients.

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Missing data, Oncology, Precision medicine, Survival analysis
National Category
Cancer and Oncology
Identifiers
urn:nbn:se:umu:diva-227835 (URN); 10.1016/j.cmpb.2024.108308 (DOI); 001266463400001 (); 2-s2.0-85197293687 (Scopus ID)
Funder
Swedish National Infrastructure for Computing (SNIC); Swedish Research Council, 2022-06725; Swedish Research Council, 2018-05973
Available from: 2024-07-11. Created: 2024-07-11. Last updated: 2025-04-24. Bibliographically approved.
Rofena, A., Guarrasi, V., Sarli, M., Piccolo, C. L., Sammarra, M., Zobel, B. B. & Soda, P. (2024). A deep learning approach for virtual contrast enhancement in Contrast Enhanced Spectral Mammography. Computerized Medical Imaging and Graphics, 116, Article ID 102398.
A deep learning approach for virtual contrast enhancement in Contrast Enhanced Spectral Mammography
2024 (English). In: Computerized Medical Imaging and Graphics, ISSN 0895-6111, E-ISSN 1879-0771, Vol. 116, article id 102398. Article in journal (Refereed). Published.
Abstract [en]

Contrast Enhanced Spectral Mammography (CESM) is a dual-energy mammographic imaging technique that first requires intravenously administering an iodinated contrast medium. It then collects both a low-energy image, comparable to standard mammography, and a high-energy image. The two scans are combined to obtain a recombined image showing contrast enhancement. Despite CESM's diagnostic advantages for breast cancer diagnosis, the contrast medium can cause side effects, and CESM also exposes patients to a higher radiation dose than standard mammography. To address these limitations, this work proposes using deep generative models for virtual contrast enhancement on CESM, aiming to make CESM contrast-free and reduce the radiation dose. Our deep networks, consisting of an autoencoder and two Generative Adversarial Networks, the Pix2Pix and the CycleGAN, generate synthetic recombined images solely from low-energy images. We perform an extensive quantitative and qualitative analysis of the models' performance, also exploiting radiologists' assessments, on a novel CESM dataset that includes 1138 images. As a further contribution of this work, we make the dataset publicly available. The results show that CycleGAN is the most promising deep network for generating synthetic recombined images, highlighting the potential of artificial intelligence techniques for virtual contrast enhancement in this field.

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
CESM, Contrast enhanced spectral mammography, Generative adversarial network, Image-to-image translation, Virtual contrast enhancement
National Category
Radiology, Nuclear Medicine and Medical Imaging; Medical Imaging
Identifiers
urn:nbn:se:umu:diva-225498 (URN); 10.1016/j.compmedimag.2024.102398 (DOI); 001246554400001 (); 38810487 (PubMedID); 2-s2.0-85194331808 (Scopus ID)
Funder
Swedish Research Council, 2022-06725; Swedish Research Council, 2018-05973; Swedish National Infrastructure for Computing (SNIC); National Academic Infrastructure for Supercomputing in Sweden (NAISS)
Available from: 2024-06-10. Created: 2024-06-10. Last updated: 2025-04-24. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-2621-072X