TuNet: End-to-End Hierarchical Brain Tumor Segmentation Using Cascaded Networks
Vu, Minh Hoang. Umeå University, Faculty of Medicine, Department of Radiation Sciences, Radiation Physics. ORCID iD: 0000-0002-2391-1419
Nyholm, Tufve. Umeå University, Faculty of Medicine, Department of Radiation Sciences, Radiation Physics. ORCID iD: 0000-0002-8971-9788
Löfstedt, Tommy. Umeå University, Faculty of Medicine, Department of Radiation Sciences, Radiation Physics. ORCID iD: 0000-0001-7119-7646
2020 (English). In: Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: 5th International Workshop, BrainLes 2019, Held in Conjunction with MICCAI 2019, Shenzhen, China, October 17, 2019, Revised Selected Papers, Part I / [ed] Alessandro Crimi and Spyridon Bakas, Cham: Springer, 2020, p. 174-186. Conference paper, Published paper (Refereed)
Abstract [en]

Glioma is one of the most common types of brain tumor; it arises in the glial cells of the human brain and spinal cord. In addition to having a high mortality rate, glioma is also very expensive to treat. Hence, automatic and accurate segmentation and measurement from the early stages are critical in order to improve patient survival and to reduce treatment costs. In the present work, we propose a novel end-to-end cascaded network for semantic segmentation in the Brain Tumors in Multimodal Magnetic Resonance Imaging Challenge 2019 (BraTS 2019) that exploits the hierarchical structure of the tumor sub-regions, with ResNet-like blocks and Squeeze-and-Excitation modules after each convolution and concatenation block. By using cross-validation, an average ensemble, and a simple post-processing step, we obtained Dice scores of 88.06, 80.84, and 80.29, and Hausdorff distances (95th percentile) of 6.10, 5.17, and 2.21 for the whole tumor, tumor core, and enhancing tumor, respectively, on the online test set. The proposed method was ranked among the top-performing entries in the task of Quantification of Uncertainty in Segmentation.
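
The two architectural ingredients named in the abstract, Squeeze-and-Excitation (SE) recalibration after each convolution block and a cascade that follows the hierarchy of the tumor sub-regions, can be illustrated with a minimal PyTorch sketch. The layer widths, normalization choices, and the two-stage structure below are illustrative assumptions, not the actual TuNet configuration.

# Minimal sketch (not the authors' implementation): a Squeeze-and-Excitation
# module applied after a convolution block, and a two-stage cascade in which
# the whole-tumor prediction is concatenated to the input of the tumor-core
# stage. Channel counts and normalization are arbitrary choices.
import torch
import torch.nn as nn

class SEBlock3D(nn.Module):
    """Channel-wise squeeze-and-excitation for 3D feature maps."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)              # squeeze: global average pool
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),   # excitation: bottleneck MLP
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c = x.shape[:2]
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w                                      # re-weight the channels

class ConvSE(nn.Module):
    """Convolution block followed by an SE module, as named in the abstract."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.se = SEBlock3D(out_ch)

    def forward(self, x):
        return self.se(self.conv(x))

class TwoStageCascade(nn.Module):
    """Stage 1 predicts the whole tumor; stage 2 sees the input plus stage 1's output."""
    def __init__(self, in_ch=4):                          # 4 MRI modalities in BraTS
        super().__init__()
        self.stage_wt = nn.Sequential(ConvSE(in_ch, 16), nn.Conv3d(16, 1, kernel_size=1))
        self.stage_tc = nn.Sequential(ConvSE(in_ch + 1, 16), nn.Conv3d(16, 1, kernel_size=1))

    def forward(self, x):
        wt = torch.sigmoid(self.stage_wt(x))              # whole-tumor probability map
        tc = torch.sigmoid(self.stage_tc(torch.cat([x, wt], dim=1)))  # tumor core, conditioned on WT
        return wt, tc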

Place, publisher, year, edition, pages
Cham: Springer, 2020. p. 174-186
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 11992
National Category
Radiology, Nuclear Medicine and Medical Imaging; Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:umu:diva-177257
DOI: 10.1007/978-3-030-46640-4_17
ISI: 000892506100017
Scopus ID: 2-s2.0-85085520133
ISBN: 978-3-030-46639-8 (print)
ISBN: 978-3-030-46640-4 (electronic)
OAI: oai:DiVA.org:umu-177257
DiVA, id: diva2:1506396
Conference
MICCAI 2019, Shenzhen, China, October 17, 2019
Note

Also part of the Image Processing, Computer Vision, Pattern Recognition, and Graphics book sub series (LNIP, volume 11992)

Available from: 2020-12-03 Created: 2020-12-03 Last updated: 2025-02-01 Bibliographically approved
In thesis
1. Resource efficient automatic segmentation of medical images
2023 (English). Doctoral thesis, comprehensive summary (Other academic)
Alternative title [sv]
Resurseffektiv automatisk medicinsk bildsegmentering
Abstract [en]

Cancer is one of the leading causes of death worldwide. In 2020, there were around 10 million cancer deaths and nearly 20 million new cancer cases in the world. Radiation therapy is essential in cancer treatment, since about half of all cancer patients receive radiation therapy at some point. During radiotherapy treatment planning (RTP), an oncologist must manually outline two types of regions in the patient's body: the target, which will be treated, and the organs-at-risk (OARs), which must be spared. This step is called delineation. Its purpose is to produce a dose plan that delivers an adequate radiation dose to the tumor while limiting the radiation exposure of healthy tissue; accurate delineations are therefore essential.

Delineation is tedious and demanding for oncologists because it requires hours of concentrated, repetitive work. It is an RTP bottleneck that is often time- and resource-intensive. Current software, such as atlas-based techniques, can assist with this procedure by registering the patient's anatomy to a predetermined anatomical map. However, atlas-based methods are often slow and error-prone for patients with abnormal anatomies.

In recent years, deep learning (DL) methods, particularly convolutional neural networks (CNNs), have led to breakthroughs in numerous medical imaging applications. The core benefits of CNNs are weight sharing and their ability to automatically detect important visual features. A typical application of CNNs to medical images is to automatically segment tumors, organs, and other structures, which is expected to save radiation oncologists considerable time during delineation. This thesis contributes to resource-efficient automatic segmentation and covers different aspects of resource efficiency.

In Paper I, we proposed a novel end-to-end cascaded network for semantic segmentation of brain tumors in the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2019. The proposed method exploited the hierarchical structure of the tumor sub-regions and was one of the top-ranked entries in the task of quantification of uncertainty in segmentation. A follow-up to this paper was ranked second in the same task of the same challenge a year later.

In Paper II, we systematically assessed the segmentation performance and computational cost of the technique called pseudo-3D as a function of the number of input slices. We compared the results to typical two-dimensional (2D) and three-dimensional (3D) CNNs and to a method called triplanar orthogonal 2D. The typical pseudo-3D approach treats adjacent slices as additional image input channels. We found that a substantial benefit from employing multiple input slices was apparent for a specific input size.
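
As an illustration of the pseudo-3D input construction described above (not the code used in Paper II), the sketch below stacks the 2k+1 axial slices around a slice of interest into the channel dimension of a 2D network input; the border handling by index clamping and the example value of k are assumptions.

# Illustrative pseudo-3D input construction: for a slice index z, the 2k+1
# neighbouring axial slices become the input channels of a 2D CNN. Slice
# indices are clamped at the volume border.
import numpy as np

def pseudo_3d_input(volume, z, k=1):
    """volume: (D, H, W) array; returns (2k+1, H, W) with neighbouring slices as channels."""
    depth = volume.shape[0]
    idx = np.clip(np.arange(z - k, z + k + 1), 0, depth - 1)
    return volume[idx]

# Example: a 2D CNN would then take 2k+1 input channels instead of 1.
vol = np.random.rand(155, 240, 240).astype(np.float32)   # BraTS-like volume shape
x = pseudo_3d_input(vol, z=77, k=2)                       # shape (5, 240, 240)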

In Paper III, we introduced a novel loss function to address several issues, including imbalanced datasets, partially labeled data, and incremental learning. The proposed loss function adapts to the given data so that all available data can be used, even when some of it lacks annotations. We show that the proposed loss function also performs well in an incremental learning setting, where an existing model can be extended semi-automatically to incorporate the delineations of newly added organs.
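
The exact loss function proposed in Paper III is not reproduced here; the sketch below only illustrates the general idea of using partially labeled data by masking, where classes that are not annotated in a given sample are excluded from a per-class soft Dice loss.

# Illustrative sketch only (not the loss from Paper III): a per-class soft Dice
# loss in which classes that are unlabelled for a given sample are masked out,
# so partially annotated samples can still contribute to training.
import torch

def masked_soft_dice(pred, target, labeled_mask, eps=1e-6):
    """
    pred:         (B, C, ...) predicted class probabilities
    target:       (B, C, ...) one-hot ground truth (zeros where unlabelled)
    labeled_mask: (B, C) tensor, 1 if class c is annotated in sample b, else 0
    """
    dims = tuple(range(2, pred.dim()))                    # spatial dimensions
    inter = (pred * target).sum(dims)
    denom = pred.sum(dims) + target.sum(dims)
    dice = (2 * inter + eps) / (denom + eps)              # (B, C) per-class Dice
    loss = (1 - dice) * labeled_mask                      # drop unlabelled classes
    return loss.sum() / labeled_mask.sum().clamp(min=1)   # mean over labelled terms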

In Paper IV, we proposed a novel method for compressing the high-dimensional activation maps that are the primary source of memory use when training modern deep networks. We examined three distinct compression methods for the activation maps to accomplish this. We demonstrated that the proposed method induces a regularization effect that acts on the layer weight gradients. By employing the proposed technique, we reduced activation map memory usage by up to 95%.
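
None of the three compression methods examined in Paper IV is reproduced here; the sketch below only shows where activation-map compression plugs into training, using a deliberately simple stand-in that stores an 8-bit mask instead of the full-precision activation for the backward pass.

# Illustrative stand-in (not a method from Paper IV): a ReLU whose tensor saved
# for the backward pass is an 8-bit mask rather than the full-precision
# activation, reducing the memory held between the forward and backward passes.
import torch

class CompressedReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        out = torch.relu(x)
        ctx.save_for_backward((out > 0).to(torch.uint8))  # 1 byte per element instead of 4
        return out

    @staticmethod
    def backward(ctx, grad_out):
        (mask,) = ctx.saved_tensors
        return grad_out * mask.to(grad_out.dtype)         # gradient passes only where x > 0

# Usage: y = CompressedReLU.apply(x)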

In Paper V, we investigated the use of generative adversarial networks (GANs) to enlarge a small dataset by generating synthetic images. We used both the real and the generated data when training CNNs for the downstream segmentation tasks. Inspired by an existing GAN, we proposed a conditional version that generates high-dimensional, high-quality medical images of different modalities together with their corresponding label maps. We evaluated the quality of the generated medical images and the effect of this augmentation on segmentation performance on six datasets.
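
The conditional GAN itself is not sketched here; the snippet below only illustrates the augmentation setting described above, mixing real image/label pairs with pairs sampled from a pretrained generator when building training batches for the downstream segmentation network. The generator interface, its latent_dim attribute, and the mixing ratio are assumptions.

# Illustrative sketch with assumed interfaces (not the authors' pipeline): each
# training batch for the segmentation CNN combines real (image, label) pairs
# with pairs produced by a pretrained conditional generator.
import torch

def mixed_batch(real_loader_iter, generator, batch_size, synth_fraction=0.5):
    """Return a batch containing both real and generated (image, label) pairs."""
    n_synth = int(batch_size * synth_fraction)
    real_imgs, real_labels = next(real_loader_iter)       # real data from a DataLoader
    with torch.no_grad():
        z = torch.randn(n_synth, generator.latent_dim)    # assumed generator interface
        synth_imgs, synth_labels = generator(z)           # assumed: returns image and label map
    imgs = torch.cat([real_imgs[: batch_size - n_synth], synth_imgs])
    labels = torch.cat([real_labels[: batch_size - n_synth], synth_labels])
    return imgs, labels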

Place, publisher, year, edition, pages
Umeå: Umeå University, 2023. p. 101
Series
Umeå University medical dissertations, ISSN 0346-6612 ; 2219
Keywords
radiotherapy, medical imaging, deep learning, convolutional neural network, generative adversarial network, data augmentation, semantic segmentation, classification, activation map compression
National Category
Computer Sciences; Radiology, Nuclear Medicine and Medical Imaging; Medical Imaging
Research subject
radiation physics
Identifiers
urn:nbn:se:umu:diva-203993 (URN)
978-91-7855-956-5 (ISBN)
978-91-7855-957-2 (ISBN)
Public defence
2023-02-24, Bergasalen, Byggnad 27, Entré Syd, Norrlands universitetssjukhus (NUS), Umeå, 09:00 (English)
Available from: 2023-02-03 Created: 2023-01-24 Last updated: 2025-02-09 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Vu, Minh Hoang; Nyholm, Tufve; Löfstedt, Tommy
