Contributions to deep learning for imaging in radiotherapy
Simkó, Attila
Umeå University, Faculty of Medicine, Department of Radiation Sciences, Radiation Physics. ORCID iD: 0000-0002-6321-8117
2023 (English) Doctoral thesis, comprehensive summary (Other academic)
Alternative title
Bidrag till djupinlärning för bildbehandling inom strålbehandling (Swedish)
Abstract [en]

Purpose: The increasing importance of medical imaging in cancer treatment, combined with the growing popularity of deep learning, motivates the contributions presented here: deep learning solutions with applications in medical imaging.

Relevance: The projects aim to improve the efficiency of MRI for automated tasks related to radiotherapy, building on recent advancements in the field of deep learning.

Approach: Our implementations build on recently developed deep learning methodologies while introducing novel approaches to the main aspects of deep learning: physics-informed augmentations, network architectures, and implicit loss functions. To make future comparisons easier, we often evaluated our methods on public datasets and made all solutions publicly available.

Results: The results of the collected projects include the development of robust models for MRI bias field correction, artefact removal, contrast transfer, and synthetic CT (sCT) generation. Furthermore, the projects stress the importance of reproducibility in deep learning research and offer guidelines for creating transparent and usable code repositories.

Conclusions: Our results collectively strengthen the position of deep learning in the field of medical imaging. The projects offer solutions that are both novel and highly applicable, while emphasizing generalization towards a wide variety of data and the transparency of the results.

Place, publisher, year, edition, pages
Umeå: Umeå University, 2023, p. 100
Series
Umeå University medical dissertations, ISSN 0346-6612; 2264
Keywords [en]
deep learning, medical imaging, radiotherapy, artefact correction, bias field correction, contrast transfer, synthetic CT, reproducibility
National subject category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
URN: urn:nbn:se:umu:diva-215693; ISBN: 9789180701945 (print); ISBN: 9789180701952 (digital); OAI: oai:DiVA.org:umu-215693; DiVA, id: diva2:1807216
Public defence
2024-01-26, E04, Norrlands universitetssjukhus, Umeå, 13:00 (English)
Available from: 2023-11-08 Created: 2023-10-25 Last updated: 2024-07-02 Bibliographically approved
List of papers
1. Improving MR image quality with a multi-task model, using convolutional losses
2023 (English) In: BMC Medical Imaging, E-ISSN 1471-2342, Vol. 23, no. 1, article id 148. Article in journal (Refereed) Published
Abstract [en]

PURPOSE: During the acquisition of MRI data, patient-, sequence-, or hardware-related factors can introduce artefacts that degrade image quality. Four of the most significant tasks for improving MRI image quality have been bias field correction, super-resolution, motion correction, and noise correction. Machine learning has achieved outstanding results in improving MR image quality for these tasks individually, yet multi-task methods are rarely explored.

METHODS: In this study, we developed a model to simultaneously correct for all four aforementioned artefacts using multi-task learning. Two different datasets were collected, one consisting of brain scans and the other of pelvic scans, which were used to train separate models implementing their corresponding artefact augmentations. Additionally, we explored a novel loss function that aims to reconstruct not only the individual pixel values but also the image gradients, to produce sharper, more realistic results, as sketched below. The difference between the evaluated methods was tested for significance using a Friedman test of equivalence followed by a Nemenyi post-hoc test.
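The paper's exact convolutional loss is not reproduced here; the following is a minimal sketch of the general idea of a gradient-aware reconstruction loss, assuming PyTorch and single-channel image batches (the function name gradient_aware_loss and the Sobel-based gradient operator are illustrative assumptions, not the authors' implementation):

    import torch
    import torch.nn.functional as F

    def gradient_aware_loss(pred, target):
        # Illustrative sketch: penalise errors in image gradients as well as
        # in raw pixel values. Inputs are (N, 1, H, W) float tensors.
        gx = torch.tensor([[-1., 0., 1.],
                           [-2., 0., 2.],
                           [-1., 0., 1.]]).view(1, 1, 3, 3)
        gy = gx.transpose(2, 3)  # the Sobel y kernel is the transpose of Sobel x

        def grads(img):
            return F.conv2d(img, gx, padding=1), F.conv2d(img, gy, padding=1)

        px, py = grads(pred)
        tx, ty = grads(target)
        # Pixel-wise term plus two gradient terms.
        return F.mse_loss(pred, target) + F.mse_loss(px, tx) + F.mse_loss(py, ty)

Replacing a plain mean squared error loss with a term of this kind is one way to encourage sharper outputs, which is the motivation the abstract describes.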

RESULTS: Our proposed model generally outperformed other commonly used correction methods for individual artefacts, consistently achieving equal or superior results in at least one of the evaluation metrics. For images with multiple simultaneous artefacts, we show that the performance of a combination of models, each trained to correct an individual artefact, depends heavily on the order in which they are applied. This is not an issue for our proposed multi-task model. The model trained using our novel convolutional loss function always outperformed the model trained with a mean squared error loss when evaluated using Visual Information Fidelity, a quality metric connected to perceptual quality.

CONCLUSION: We trained two models for multi-task MRI artefact correction of brain and pelvic scans. We used a novel loss function that significantly improves the image quality of the outputs over using mean squared error. The approach performs well on real-world data, and it provides insight into which artefacts it detects and corrects. Our proposed model and source code were made publicly available.

Place, publisher, year, edition, pages
BioMed Central (BMC), 2023
Keywords
Image artefact correction, Machine learning, Magnetic resonance imaging
National subject category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:umu:diva-215277 (URN); 10.1186/s12880-023-01109-z (DOI); 37784039 (PubMedID); 2-s2.0-85173046817 (Scopus ID)
Research funders
Cancerforskningsfonden i Norrland, LP 18-2182; Cancerforskningsfonden i Norrland, AMP 18-912; Cancerforskningsfonden i Norrland, AMP 20-1014; Cancerforskningsfonden i Norrland, LP 22-2319; Region Västerbotten
Available from: 2023-10-17 Created: 2023-10-17 Last updated: 2024-07-04 Bibliographically approved
2. MRI bias field correction with an implicitly trained CNN
2022 (English) In: Proceedings of the 5th International Conference on Medical Imaging with Deep Learning / [ed] Ender Konukoglu; Bjoern Menze; Archana Venkataraman; Christian Baumgartner; Qi Dou; Shadi Albarqouni, ML Research Press, 2022, pp. 1125-1138. Conference paper, Published paper (Refereed)
Abstract [en]

In magnetic resonance imaging (MRI), bias fields are difficult to correct since they are inherently unknown. They cause intra-volume intensity inhomogeneities which limit the performance of subsequent automatic medical imaging tasks, e.g., tissue-based segmentation. Since the ground truth is unavailable, training a supervised machine learning solution requires approximating the bias fields, which limits the resulting method. We introduce implicit training, which sidesteps the inherent lack of data and allows the training of machine learning solutions without ground truth. We describe how training a model implicitly for bias field correction allows using non-medical data for training, achieving a highly generalized model. The implicit approach was compared to a more traditional training based on medical data. Both models were compared to an optimized N4ITK method, with evaluations on six datasets. The implicitly trained model improved the homogeneity of all encountered medical data, and it generalized better across a range of anatomies than the model trained traditionally. The model achieves a speed-up by a factor of 100 over an optimized N4ITK method, and after training it also requires no parameters to tune. For tasks such as bias field correction, where ground truth is generally not available but the characteristics of the corruption are known, implicit training promises to be a fruitful alternative for highly generalized solutions.
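The paper's exact formulation is not reproduced here; the following is a minimal sketch of the general idea of training without ground truth by simulating a known corruption, assuming PyTorch, a model that maps a corrupted image to an estimated multiplicative bias field, and (N, 1, H, W) image batches (random_bias_field and implicit_training_step are illustrative names, not the authors' code):

    import torch
    import torch.nn.functional as F

    def random_bias_field(shape, strength=0.5):
        # Smooth multiplicative field: upsample low-resolution random noise.
        low = torch.rand(1, 1, 4, 4)
        field = F.interpolate(low, size=shape, mode="bilinear", align_corners=False)
        return 1.0 + strength * (field - 0.5)

    def implicit_training_step(model, image, optimizer):
        # `image` may come from any domain, even non-medical photographs,
        # because the corruption is simulated rather than measured.
        field = random_bias_field(image.shape[-2:])
        corrupted = image * field
        predicted_field = model(corrupted).clamp(min=1e-3)  # avoid division by zero
        # The loss is defined through the known corruption model: dividing
        # the corrupted input by the predicted field should recover the
        # clean input image.
        loss = F.mse_loss(corrupted / predicted_field, image)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Because the supervision signal comes from the simulated corruption itself, no bias-free medical ground truth is ever needed, which is what allows the highly generalized training the abstract describes.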

Place, publisher, year, edition, pages
ML Research Press, 2022
Series
Proceedings of Machine Learning Research, ISSN 2640-3498; 172
Keywords
Self-supervised learning, Implicit Training, Magnetic Resonance Imaging, Bias Field Correction, Image Restoration
National subject category
Radiology, Nuclear Medicine and Medical Imaging; Computer graphics and computer vision
Identifiers
urn:nbn:se:umu:diva-205226 (URN); 2-s2.0-85169103625 (Scopus ID)
Conference
International Conference on Medical Imaging with Deep Learning, Zurich, Switzerland, July 6-8, 2022
Available from: 2023-02-27 Created: 2023-02-27 Last updated: 2025-02-01 Bibliographically approved
3. Changing the Contrast of Magnetic Resonance Imaging Signals using Deep Learning
2021 (English) In: Proceedings of the Fourth Conference on Medical Imaging with Deep Learning, PMLR / [ed] Mattias Heinrich; Qi Dou; Marleen de Bruijne; Jan Lellmann; Alexander Schläfer; Floris Ernst, Lübeck University; Hamburg University of Technology, 2021, Vol. 143, pp. 713-727. Conference paper, Published paper (Refereed)
Abstract [en]

The contrast settings to select before acquiring a magnetic resonance imaging (MRI) signal depend heavily on the subsequent tasks. Each contrast highlights different tissues; automated segmentation tools, for example, might be optimized for a certain contrast, and for radiotherapy, multiple scans of the same region with different contrasts can achieve better accuracy for delineating tumours and organs at risk. Unfortunately, the optimal contrast for the subsequent automated methods might not be known at the time of signal acquisition, and performing multiple scans with different contrasts increases the total examination time, while registering the sequences introduces extra work and potential errors. Building on the recent achievements of deep learning in medical applications, the presented work describes a novel approach for transferring any contrast to any other. The novel model architecture incorporates the signal equation for spin echo sequences, and hence the model inherently learns the unknown quantitative maps for proton density and the T1 and T2 relaxation times (PD, T1 and T2, respectively). This grants the model the ability to retrospectively reconstruct spin echo sequences by changing the contrast settings echo time and repetition time (TE and TR, respectively). The model learns to identify the contrast of pelvic MR images, so no paired data of the same anatomy from different contrasts is required for training. This means that the experiments are easily reproducible with other contrasts or other patient anatomies. Regardless of the contrast of the input image, the model achieves accurate results for reconstructing the signal with the contrasts available for evaluation. For the same anatomy, the quantitative maps are consistent for a range of input image contrasts. Realized in practice, the proposed method would greatly simplify the modern radiotherapy pipeline. The trained model is made public together with a tool for testing the model on example images.
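For reference, the spin echo signal equation the abstract refers to can be written in its standard form (assuming a simple spin echo sequence; the paper may use a variant) as

    $$ S(T_E, T_R) \propto PD \left(1 - e^{-T_R/T_1}\right) e^{-T_E/T_2} $$

so a network that recovers the quantitative maps PD, T1 and T2 can re-evaluate this equation for any choice of TE and TR, which is exactly the retrospective contrast transfer described above.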

Place, publisher, year, edition, pages
Lübeck University; Hamburg University of Technology, 2021
Series
Proceedings of Machine Learning Research, ISSN 2640-3498
National subject category
Radiology, Nuclear Medicine and Medical Imaging; Computer graphics and computer vision
Identifiers
urn:nbn:se:umu:diva-190497 (URN); 2-s2.0-85162848187 (Scopus ID)
Conference
Medical Imaging with Deep Learning (MIDL), Online, 7-9 July, 2021.
Available from: 2021-12-16 Created: 2021-12-16 Last updated: 2025-02-01 Bibliographically approved
4. Towards MR contrast independent synthetic CT generation
2024 (English) In: Zeitschrift für Medizinische Physik, ISSN 0939-3889, E-ISSN 1876-4436, Vol. 34, no. 2, pp. 270-277. Article in journal (Refereed) Published
Abstract [en]

The use of synthetic CT (sCT) in the radiotherapy workflow would reduce costs and scan time while removing the uncertainties of working with both MR and CT modalities. The performance of deep learning (DL) solutions for sCT generation is steadily increasing; however, most proposed methods were trained and validated on private datasets of a single contrast from a single scanner. Such solutions might not perform equally well on other datasets, limiting their general usability and therefore their value. Additionally, functional evaluations of sCTs, such as dosimetric comparisons with CT-based dose calculations, better show the impact of the methods, but these evaluations are more labour-intensive than pixel-wise metrics.

To improve the generalization of an sCT model, we propose to incorporate a pre-trained DL model that pre-processes the input MR images by generating artificial proton density, T1 and T2 maps (i.e., contrast-independent quantitative maps), which are then used for sCT generation, as sketched below. Using a dataset of only T2w MR images, the robustness of this approach towards input MR contrasts is compared to that of a model trained on the MR images directly. We evaluate the generated sCTs using pixel-wise metrics and by calculating mean radiological depths, as an approximation of the mean delivered dose. On T2w images acquired with the same settings as the training dataset, there was no significant difference between the performance of the models. However, when evaluated on T1w images, and on a wide range of other contrasts and scanners from both public and private datasets, our approach outperforms the baseline model. Using a dataset of T2w MR images, our proposed model implements synthetic quantitative maps to generate sCT images, improving the generalization towards other contrasts. Our code and trained models are publicly available.
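The following is a minimal sketch of the two-stage pipeline described above, assuming PyTorch; qmap_model and sct_model are hypothetical stand-ins for the pre-trained quantitative-map network and the sCT generator, not the authors' published interfaces:

    import torch

    def generate_sct(mr_image, qmap_model, sct_model):
        # mr_image: (N, 1, H, W) tensor holding MR images of any contrast.
        with torch.no_grad():
            # Stage 1: map the input MR image to artificial
            # contrast-independent quantitative maps (PD, T1, T2).
            qmaps = qmap_model(mr_image)   # assumed shape: (N, 3, H, W)
            # Stage 2: generate the synthetic CT from the quantitative maps,
            # so this model never sees the scanner-specific contrast.
            sct = sct_model(qmaps)         # assumed shape: (N, 1, H, W)
        return sct

Decoupling the contrast-dependent step (stage 1) from the sCT generation (stage 2) is what the abstract credits for the improved generalization towards unseen contrasts.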

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
MRI contrast, Robust machine learning, Synthetic CT generation
National subject category
Computer Sciences; Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:umu:diva-214270 (URN); 10.1016/j.zemedi.2023.07.001 (DOI); 001246727700001; 37537099 (PubMedID); 2-s2.0-85169824488 (Scopus ID)
Research funders
Cancerforskningsfonden i Norrland, LP 18-2182; Cancerforskningsfonden i Norrland, AMP 18-912; Cancerforskningsfonden i Norrland, AMP 20-1014; Cancerforskningsfonden i Norrland, LP 22-2319; Region Västerbotten; Swedish National Infrastructure for Computing (SNIC)
Available from: 2023-09-12 Created: 2023-09-12 Last updated: 2024-07-04 Bibliographically approved
5. Reproducibility of the methods in medical imaging with deep learning
2023 (English) In: Medical Imaging with Deep Learning 2023 / [ed] Ipek Oguz; Jack Noble; Xiaoxiao Li; Martin Styner; Christian Baumgartner; Mirabela Rusu; Tobias Heinmann; Despina Kontos; Bennett Landman; Benoit Dawant, ML Research Press, 2023, pp. 95-106. Conference paper, Published paper (Refereed)
Abstract [en]

Concerns about the reproducibility of deep learning research are more prominent than ever, with no clear solution in sight. The Medical Imaging with Deep Learning (MIDL) conference has advanced empirical rigour with regard to reproducibility by advocating open access, and recently also by recommending that authors make their code public; both practices have been adopted by the majority of conference submissions. We evaluated all accepted full paper submissions to MIDL between 2018 and 2022 using established, but adjusted, guidelines addressing the reproducibility and quality of the public repositories. The evaluations show that publishing repositories and using public datasets are becoming more popular, which helps traceability, but the quality of the repositories shows room for improvement in every aspect. Merely 22% of all submissions contain a repository that was deemed repeatable using our evaluations. From the issues commonly encountered during the evaluations, we propose a set of guidelines for machine learning-related research for medical imaging applications, adjusted specifically for future submissions to MIDL. We presented our results to future MIDL authors, who were eager to continue an open discussion on the topic of code reproducibility.

Place, publisher, year, edition, pages
ML Research Press, 2023
Series
Proceedings of Machine Learning Research, ISSN 2640-3498; 227
Keywords
Reproducibility, Reproducibility of the Methods, Deep Learning, Medical Imaging, Open Science, Transparent Research
National subject category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-215692 (URN); 2-s2.0-85189322413 (Scopus ID)
Conference
Medical Imaging with Deep Learning 2023, MIDL, Nashville, July 10-12, 2023
Note

Originally included in thesis in manuscript form. 

Available from: 2023-10-25 Created: 2023-10-25 Last updated: 2024-07-02 Bibliographically approved

Open Access in DiVA

fulltext (4493 kB, 363 downloads)
File name: FULLTEXT03.pdf; type: fulltext; MIME type: application/pdf

spikblad (147 kB, 55 downloads)
File name: FULLTEXT04.pdf; type: spikblad; MIME type: application/pdf
