Toward a multimodal deep learning approach for histological subtype classification in NSCLC
Humanitas University, Department of Biomedical Sciences, Milan, Italy.
Università Vita-Salute San Raffaele, Milan, Italy; Irccs Ospedale San Raffaele, Milan, Italy.
Università Vita-Salute San Raffaele, Milan, Italy; Irccs Ospedale San Raffaele, Milan, Italy.
Umeå University, Faculty of Medicine, Department of Diagnostics and Intervention; Umeå University, Faculty of Medicine, Department of Radiation Sciences, Radiation Physics; Università Campus Bio-Medico di Roma, Department of Engineering, Rome, Italy. ORCID iD: 0000-0003-2621-072X
2024 (English). In: 2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM) / [ed] Mario Cannataro; Huiru (Jane) Zheng; Lin Gao; Jianlin (Jack) Cheng; João Luís de Miranda; Ester Zumpano; Xiaohua Hu; Young-Rae Cho; Taesung Park, IEEE, 2024, p. 6327-6333. Conference paper, Published paper (Refereed)
Abstract [en]

Accurate classification of non-small cell lung cancer (NSCLC) subtypes is crucial for implementing effective, personalized treatment strategies. This study introduces a novel 3D multimodal convolutional neural network (CNN) architecture for histological subtype classification of NSCLC, specifically distinguishing between lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC). In the context of multimodal deep learning, our approach employs an intermediate fusion technique to integrate the PET and CT imaging modalities, leveraging the complementary information provided by each. We utilized two public datasets alongside a private one, encompassing a total of 714 patients. Our multimodal method was compared against unimodal methods using either CT or PET images alone, and achieved better performance. Interestingly, we found that integrating information from multiple imaging modalities can lead to more accurate and reliable NSCLC subtype classification even in the case of skewed a priori sample distributions. This non-invasive method has the potential to enhance diagnostic accuracy, improve treatment decisions, and contribute to more personalized and effective lung cancer care strategies.
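The abstract names intermediate (joint) fusion as the integration strategy: each modality is encoded separately, the resulting feature vectors are combined, and a shared head produces the LUAD-vs-LUSC prediction. A minimal sketch of that data flow, using a toy pooling "encoder" in place of the paper's 3D CNN branches (all layer shapes, weights, and helper names here are illustrative assumptions, not the published architecture):

```python
# Sketch of intermediate fusion for two co-registered 3D modalities.
# The "encoders" below are stand-ins for per-modality 3D CNN branches;
# their outputs are concatenated before a shared classification head.
from statistics import mean

def encode(volume):
    """Toy per-modality encoder: summarize a 3D volume as a 3-dim feature
    vector. A stand-in for a learned 3D CNN feature extractor."""
    voxels = [v for plane in volume for row in plane for v in row]
    return [mean(voxels), max(voxels), min(voxels)]

def fuse_and_classify(ct_volume, pet_volume, weights, bias):
    """Intermediate fusion: concatenate the modality-specific feature
    vectors, then apply a shared linear head (weights are hypothetical)."""
    features = encode(ct_volume) + encode(pet_volume)  # 6-dim joint vector
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return "LUAD" if score >= 0 else "LUSC"

# Tiny 2x2x2 dummy volumes standing in for co-registered CT and PET scans.
ct = [[[0.1, 0.2], [0.3, 0.4]], [[0.5, 0.6], [0.7, 0.8]]]
pet = [[[1.0, 0.9], [0.8, 0.7]], [[0.6, 0.5], [0.4, 0.3]]]
label = fuse_and_classify(ct, pet, weights=[1.0] * 6, bias=-3.0)
```

The design point the sketch illustrates is that fusion happens at the feature level, after modality-specific encoding but before the final decision, rather than at the input (early fusion) or prediction (late fusion) stage.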

Place, publisher, year, edition, pages
IEEE, 2024. p. 6327-6333
Series
Proceedings (IEEE International Conference on Bioinformatics and Biomedicine), ISSN 2156-1133, E-ISSN 2156-1125
Keywords [en]
artificial intelligence, intermediate fusion, joint fusion, medical imaging, virtual biopsy
National Category
Cancer and Oncology; Artificial Intelligence; Radiology and Medical Imaging
Identifiers
URN: urn:nbn:se:umu:diva-235666
DOI: 10.1109/BIBM62325.2024.10822421
Scopus ID: 2-s2.0-85217281496
ISBN: 9798350386226 (electronic)
ISBN: 9798350386233 (print)
OAI: oai:DiVA.org:umu-235666
DiVA, id: diva2:1940454
Conference
2024 IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2024, Lisbon, Portugal, December 3-6, 2024
Note

The source code for the implementation described in this paper is available at https://github.com/aksufatih/multimodal-histology-classification

Available from: 2025-02-26. Created: 2025-02-26. Last updated: 2025-02-26. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | Scopus

Authority records

Soda, Paolo

