Nyholm, Tufve
Publications (10 of 120)
Sandgren, K., Strandberg, S., Jonsson, J., Grefve, J., Keeratijarut Lindberg, A., Nilsson, E., . . . Riklund, K. (2023). Histopathology-validated lesion detection rates of clinically significant prostate cancer with mpMRI, [68Ga]PSMA-11-PET and [11C]Acetate-PET. Nuclear medicine communications, 44(11), 997-1004.
2023 (English). In: Nuclear medicine communications, ISSN 0143-3636, E-ISSN 1473-5628, Vol. 44, no. 11, p. 997-1004. Article in journal (Refereed). Published.
Abstract [en]

Objective: PET/CT and multiparametric MRI (mpMRI) are important diagnostic tools in clinically significant prostate cancer (csPC). The aim of this study was to compare csPC detection rates with [68Ga]PSMA-11-PET (PSMA-PET), [11C]Acetate-PET (ACE-PET) and mpMRI, with histopathology as reference, to identify the most suitable imaging modalities for subsequent hybrid imaging. An additional aim was to compare inter-reader variability to assess reproducibility.

Methods: During 2016–2019, all study participants were examined with PSMA-PET/mpMRI and ACE-PET/CT prior to radical prostatectomy. PSMA-PET, ACE-PET and mpMRI were evaluated separately by two observers, and were compared with histopathology-defined csPC. Statistical analyses included two-sided McNemar test and index of specific agreement.
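
As a hedged illustration of the two statistics named above (the counts below are hypothetical, not taken from the study), a paired 2x2 detection table is all that is needed:

```python
# Hypothetical paired detection counts for two modalities (not study data):
# a = lesion found by both, b/c = found by only one.
from scipy.stats import binomtest

a, b, c = 25, 6, 4

# The exact two-sided McNemar test operates on the discordant pairs b and c.
mcnemar_p = binomtest(b, b + c, p=0.5).pvalue

# Index of specific agreement (positive agreement): 2a / (2a + b + c).
isa = 2 * a / (2 * a + b + c)

print(f"McNemar p = {mcnemar_p:.3f}, index of specific agreement = {isa:.2f}")
```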

Results: Fifty-five study participants were included, with 130 histopathological intraprostatic lesions >0.05 cc. Of these, 32% (42/130) were classified as csPC with ISUP grade ≥2 and volume >0.5 cc. PSMA-PET and mpMRI showed no difference in performance (P = 0.48), with mean csPC detection rates of 70% (29.5/42) and 74% (31/42), respectively, while with ACE-PET the mean csPC detection rate was 37% (15.5/42). Interobserver agreement was higher with PSMA-PET than with mpMRI [79% (26/33) vs 67% (24/38)]. Including all detected lesions from each pair of observers, the detection rate increased to 90% (38/42) with mpMRI and 79% (33/42) with PSMA-PET.

Conclusion: PSMA-PET and mpMRI showed high csPC detection rates and superior performance compared to ACE-PET. The interobserver agreement indicates higher reproducibility with PSMA-PET. The combined result of all observers in both PSMA-PET and mpMRI showed the highest detection rate, suggesting an added value of a hybrid imaging approach.

Place, publisher, year, edition, pages
Lippincott Williams & Wilkins, 2023
Keywords
acetate-PET, detection rate, intraprostatic lesion, multiparametric MRI, prostate cancer, PSMA-PET
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:umu:diva-216125 (URN); 10.1097/MNM.0000000000001743 (DOI); 001083841200009 (ISI); 37615497 (PubMedID); 2-s2.0-85174936230 (Scopus ID)
Funder
Swedish Cancer Society; Västerbotten County Council
Available from: 2023-11-06. Created: 2023-11-06. Last updated: 2023-11-06. Bibliographically approved.
Björeland, U., Notstam, K., Fransson, P., Söderkvist, K., Beckman, L., Jonsson, J., . . . Thellenberg-Karlsson, C. (2023). Hyaluronic acid spacer in prostate cancer radiotherapy: dosimetric effects, spacer stability and long-term toxicity and PRO in a phase II study. Radiation Oncology, 18(1), Article ID 1.
2023 (English). In: Radiation Oncology, ISSN 1748-717X, E-ISSN 1748-717X, Vol. 18, no. 1, article id 1. Article in journal (Refereed). Published.
Abstract [en]

BACKGROUND: Perirectal spacers may be beneficial to reduce rectal side effects from radiotherapy (RT). Here, we present the impact of a hyaluronic acid (HA) perirectal spacer on rectal dose as well as spacer stability, long-term gastrointestinal (GI) and genitourinary (GU) toxicity and patient-reported outcome (PRO).

METHODS: In this phase II study, 81 patients with low- and intermediate-risk prostate cancer received transrectal injections of HA before external beam RT (78 Gy in 39 fractions). The HA spacer was evaluated with MRI four times: before (MR0) and after (MR1) the HA injection, and at the middle (MR2) and end (MR3) of RT. GI and GU toxicity was assessed by a physician for up to five years according to the RTOG scale. PROs were collected using the Swedish National Prostate Cancer Registry and Prostate cancer symptom scale questionnaires.
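
For orientation, the relative dose-volume metrics reported in the results correspond, for the 78 Gy prescription, to thresholds of 0.70 x 78 = 54.6 Gy (V70%) and 0.90 x 78 = 70.2 Gy (V90%). A minimal numpy sketch with made-up data shows the computation:

```python
import numpy as np

def v_percent(dose_gy, structure_mask, prescription_gy=78.0, level=0.70):
    """Fraction of a structure receiving at least `level` of the prescription
    dose, e.g. V70% of 78 Gy uses the 0.70 * 78 = 54.6 Gy threshold."""
    threshold = level * prescription_gy
    return float((dose_gy[structure_mask] >= threshold).mean())

# Made-up dose grid and rectum mask, for illustration only.
rng = np.random.default_rng(0)
dose = rng.uniform(0.0, 80.0, size=(64, 64, 32))
rectum = np.zeros(dose.shape, dtype=bool)
rectum[20:40, 20:40, 10:20] = True

print("V70% =", v_percent(dose, rectum, level=0.70))  # dose >= 54.6 Gy
print("V90% =", v_percent(dose, rectum, level=0.90))  # dose >= 70.2 Gy
```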

RESULTS: There was a significant reduction in rectal V70% (54.6 Gy) and V90% (70.2 Gy) between MR0 and MR1, as well as from MR0 to MR2 and MR3. From MR1 to MR2/MR3, HA thickness decreased by 28%/32% and the CTV-rectum space by 19%/17% at the middle level. The cumulative late grade ≥ 2 GI toxicity at 5 years was 5%, and the proportion of PRO moderate or severe overall bowel problems at the 5-year follow-up was 12%. Cumulative late grade ≥ 2 GU toxicity at 5 years was 12%, and moderate or severe overall urinary problems at 5 years were reported by 10%.

CONCLUSION: We show that the HA spacer reduced rectal dose and long-term toxicity.

Place, publisher, year, edition, pages
BioMed Central (BMC), 2023
Keywords
Hyaluronic Acid, Prostate cancer, Radiotherapy, Rectal toxicity
National Category
Cancer and Oncology
Identifiers
urn:nbn:se:umu:diva-203799 (URN); 10.1186/s13014-022-02197-x (DOI); 000906713000001 (ISI); 36593460 (PubMedID); 2-s2.0-85145492354 (Scopus ID)
Funder
Region Västernorrland; Cancerforskningsfonden i Norrland; Visare Norr
Available from: 2023-01-20. Created: 2023-01-20. Last updated: 2023-09-05. Bibliographically approved.
Simkó, A., Ruiter, S., Löfstedt, T., Garpebring, A., Nyholm, T., Bylund, M. & Jonsson, J. (2023). Improving MR image quality with a multi-task model, using convolutional losses. BMC Medical Imaging, 23(1), Article ID 148.
2023 (English). In: BMC Medical Imaging, ISSN 1471-2342, E-ISSN 1471-2342, Vol. 23, no. 1, article id 148. Article in journal (Refereed). Published.
Abstract [en]

PURPOSE: During the acquisition of MRI data, patient-, sequence-, or hardware-related factors can introduce artefacts that degrade image quality. Four of the most significant tasks for improving MR image quality have been bias field correction, super-resolution, motion correction and noise correction. Machine learning has achieved outstanding results in improving MR image quality for each of these tasks individually, yet multi-task methods are rarely explored.

METHODS: In this study, we developed a model to simultaneously correct all four aforementioned artefacts using multi-task learning. Two datasets were collected, one consisting of brain scans and the other of pelvic scans, and these were used to train separate models implementing the corresponding artefact augmentations. Additionally, we explored a novel loss function that aims to reconstruct not only the individual pixel values but also the image gradients, to produce sharper, more realistic results. The difference between the evaluated methods was tested for significance using a Friedman test of equivalence followed by a Nemenyi post-hoc test.
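
The paper's convolutional loss is not reproduced here, but the underlying idea of supervising image gradients alongside pixel values can be sketched with finite differences in PyTorch; the equal weighting is an assumption:

```python
import torch
import torch.nn.functional as F

def pixel_and_gradient_loss(pred, target, grad_weight=1.0):
    """MSE on pixel values plus MSE on finite-difference image gradients,
    for (batch, channels, H, W) tensors. A generic gradient-aware loss,
    not the paper's exact convolutional loss."""
    pixel = F.mse_loss(pred, target)
    dx = F.mse_loss(pred[..., :, 1:] - pred[..., :, :-1],
                    target[..., :, 1:] - target[..., :, :-1])
    dy = F.mse_loss(pred[..., 1:, :] - pred[..., :-1, :],
                    target[..., 1:, :] - target[..., :-1, :])
    return pixel + grad_weight * (dx + dy)

loss = pixel_and_gradient_loss(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64))
```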

RESULTS: Our proposed model generally outperformed other commonly used correction methods for individual artefacts, consistently achieving equal or superior results in at least one of the evaluation metrics. For images with multiple simultaneous artefacts, we show that the performance of a combination of models trained to correct individual artefacts depends heavily on the order in which they are applied. This is not an issue for our proposed multi-task model. The model trained using our novel convolutional loss function always outperformed the model trained with a mean squared error loss when evaluated using Visual Information Fidelity, a quality metric connected to perceptual quality.

CONCLUSION: We trained two models for multi-task MRI artefact correction of brain and pelvic scans. We used a novel loss function that significantly improves the image quality of the outputs over using mean squared error. The approach performs well on real-world data, and it provides insight into which artefacts it detects and corrects. Our proposed model and source code were made publicly available.

Place, publisher, year, edition, pages
BioMed Central (BMC), 2023
Keywords
Image artefact correction, Machine learning, Magnetic resonance imaging
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:umu:diva-215277 (URN); 10.1186/s12880-023-01109-z (DOI); 37784039 (PubMedID); 2-s2.0-85173046817 (Scopus ID)
Funder
Cancerforskningsfonden i Norrland, LP 18-2182; Cancerforskningsfonden i Norrland, AMP 18-912; Cancerforskningsfonden i Norrland, AMP 20-1014; Cancerforskningsfonden i Norrland, LP 22-2319; Region Västerbotten
Available from: 2023-10-17. Created: 2023-10-17. Last updated: 2023-10-25. Bibliographically approved.
Kaushik, S. S., Bylund, M., Cozzini, C., Shanbhag, D., Petit, S. F., Wyatt, J. J., . . . Menze, B. (2023). Region of interest focused MRI to synthetic CT translation using regression and segmentation multi-task network. Physics in Medicine and Biology, 68(19), Article ID 195003.
2023 (English). In: Physics in Medicine and Biology, ISSN 0031-9155, E-ISSN 1361-6560, Vol. 68, no. 19, article id 195003. Article in journal (Refereed). Published.
Abstract [en]

Objective: In an MR-only clinical workflow, replacing CT with an MR image benefits workflow efficiency and reduces the radiation dose to the patient. An important step in eliminating the CT scan from the workflow is to generate the information provided by CT from an MR image. In this work, we aim to demonstrate a method for generating accurate synthetic CT (sCT) from an MR image to suit the radiation therapy (RT) treatment planning workflow. We show the feasibility of the method and make way for a broader clinical evaluation.

Approach: We present a machine learning method for sCT generation from zero-echo-time (ZTE) MRI aimed at structural and quantitative accuracy of the image, with a particular focus on accurate bone density prediction. Misestimation of bone density in the radiation path could lead to unintended dose delivery to the target volume and result in a suboptimal treatment outcome. We propose a loss function that favors the spatially sparse bone region in the image. We harness the ability of a multi-task network to produce correlated outputs as a framework to localize the region of interest (RoI) via segmentation, emphasize regression of values within the RoI, and still retain overall accuracy via global regression. The network is optimized by a composite loss function that combines a dedicated loss from each task.
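
A minimal sketch of a composite loss of the kind described, combining global regression, regression emphasized inside the sparse bone RoI, and RoI segmentation; the weights and tensor names are assumptions, not the authors' exact formulation:

```python
import torch
import torch.nn.functional as F

def composite_loss(sct_pred, seg_logits, ct_true, bone_mask,
                   w_global=1.0, w_roi=1.0, w_seg=1.0):
    """Illustrative composite loss: global regression, regression emphasized
    inside a sparse RoI (bone), and RoI segmentation. All tensors are
    (B, 1, H, W); weights and names are assumptions."""
    global_l1 = F.l1_loss(sct_pred, ct_true)
    roi = bone_mask.bool()
    roi_l1 = F.l1_loss(sct_pred[roi], ct_true[roi]) if roi.any() else sct_pred.new_tensor(0.0)
    seg = F.binary_cross_entropy_with_logits(seg_logits, bone_mask.float())
    return w_global * global_l1 + w_roi * roi_l1 + w_seg * seg
```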

Main results: We included 54 brain patient images in this study and tested the sCT images against reference CT on a subset of 20 cases. A pilot dose evaluation was performed on 9 of the 20 test cases to demonstrate the viability of the generated sCT in RT planning. The average quantitative metrics produced by the proposed method over the test set were: (a) mean absolute error (MAE) of 70 ± 8.6 HU; (b) peak signal-to-noise ratio (PSNR) of 29.4 ± 2.8 dB; (c) structural similarity metric (SSIM) of 0.95 ± 0.02; and (d) Dice coefficient of the body region of 0.984 ± 0.
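
The four reported metrics can be computed per test case roughly as follows; the skimage functions are real, while the HU data range and array names are placeholder assumptions:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_sct(sct, ct, body_pred, body_true, hu_range=(-1024.0, 3000.0)):
    """MAE (HU), PSNR (dB), SSIM and body Dice for one case. A generic
    evaluation sketch; the HU range is an assumed data range."""
    mae = float(np.abs(sct - ct).mean())
    data_range = hu_range[1] - hu_range[0]
    psnr = peak_signal_noise_ratio(ct, sct, data_range=data_range)
    ssim = structural_similarity(ct, sct, data_range=data_range)
    dice = 2.0 * np.logical_and(body_pred, body_true).sum() / (
        body_pred.sum() + body_true.sum())
    return mae, psnr, ssim, float(dice)

# Made-up arrays, for illustration only.
rng = np.random.default_rng(1)
ct = rng.uniform(-1000, 2000, (64, 64))
sct = ct + rng.normal(0, 50, ct.shape)
body = np.ones_like(ct, dtype=bool)
print(evaluate_sct(sct, ct, body, body))
```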

Significance: We demonstrate that the proposed method generates sCT images that resemble the visual characteristics of a real CT image, with a quantitative accuracy that suits the RT dose planning application. We compare dose calculations from the proposed sCT and the real CT in a radiation therapy treatment planning setup and show that sCT-based planning falls within a 0.5% target dose error. The method presented here, with an initial dose evaluation, is an encouraging precursor to a broader clinical evaluation of sCT-based RT planning on different anatomical regions.

Place, publisher, year, edition, pages
Institute of Physics (IOP), 2023
Keywords
focused loss, image translation, MRI radiation therapy, multi-task CNN, synthetic CT
National Category
Radiology, Nuclear Medicine and Medical Imaging; Medical Image Processing
Identifiers
urn:nbn:se:umu:diva-214754 (URN); 10.1088/1361-6560/acefa3 (DOI); 37567235 (PubMedID); 2-s2.0-85171601230 (Scopus ID)
Funder
EU, Horizon 2020
Available from: 2023-10-13. Created: 2023-10-13. Last updated: 2023-10-13. Bibliographically approved.
Simkó, A., Garpebring, A., Jonsson, J., Nyholm, T. & Löfstedt, T. (2023). Reproducibility of the methods in medical imaging with deep learning. Paper presented at Medical Imaging with Deep Learning 2023, MIDL, Nashville, July 10-12, 2023.
2023 (English). Conference paper, Published paper (Refereed).
Abstract [en]

Concerns about the reproducibility of deep learning research are more prominent than ever, with no clear solution in sight. The Medical Imaging with Deep Learning (MIDL) conference has made advances in employing empirical rigor with regard to reproducibility by advocating open access and, recently, also recommending that authors make their code public, both aspects being adopted by the majority of the conference submissions. We evaluated all accepted full paper submissions to MIDL between 2018 and 2022 using established but adjusted guidelines addressing the reproducibility and quality of the public repositories. The evaluations show that publishing repositories and using public datasets are becoming more popular, which helps traceability, but the quality of the repositories shows room for improvement in every aspect. Merely 22% of all submissions contain a repository that was deemed repeatable using our evaluations. From the issues commonly encountered during the evaluations, we propose a set of guidelines for machine learning-related research for medical imaging applications, adjusted specifically for future submissions to MIDL. We presented our results to future MIDL authors, who were eager to continue an open discussion on the topic of code reproducibility.

Keywords
Reproducibility, Reproducibility of the Methods, Deep Learning, Medical Imaging, Open Science, Transparent Research
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-215692 (URN)
Conference
Medical Imaging with Deep Learning 2023, MIDL, Nashville, July 10-12, 2023
Note

Originally included in thesis in manuscript form. 

Available from: 2023-10-25. Created: 2023-10-25. Last updated: 2023-10-26.
Simkó, A., Bylund, M., Jönsson, G., Löfstedt, T., Garpebring, A., Nyholm, T. & Jonsson, J. (2023). Towards MR contrast independent synthetic CT generation. Zeitschrift für Medizinische Physik.
2023 (English). In: Zeitschrift für Medizinische Physik, ISSN 0939-3889, E-ISSN 1876-4436. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

The use of synthetic CT (sCT) in the radiotherapy workflow would reduce costs and scan time while removing the uncertainties of working with both MR and CT modalities. The performance of deep learning (DL) solutions for sCT generation is steadily increasing; however, most proposed methods were trained and validated on private datasets of a single contrast from a single scanner. Such solutions might not perform equally well on other datasets, limiting their general usability and therefore their value. Additionally, functional evaluations of sCTs, such as dosimetric comparisons with CT-based dose calculations, better show the impact of the methods, but these evaluations are more labor-intensive than pixel-wise metrics.

To improve the generalization of an sCT model, we propose to incorporate a pre-trained DL model that pre-processes the input MR images by generating artificial proton density, T1 and T2 maps (i.e., contrast-independent quantitative maps), which are then used for sCT generation. Using a dataset of only T2w MR images, the robustness of this approach towards input MR contrast is compared to that of a model trained on the MR images directly. We evaluate the generated sCTs using pixel-wise metrics and by calculating mean radiological depths as an approximation of the mean delivered dose. On T2w images acquired with the same settings as the training dataset, there was no significant difference between the performance of the models. However, when evaluated on T1w images, and on a wide range of other contrasts and scanners from both public and private datasets, our approach outperforms the baseline model. Using a dataset of T2w MR images, our proposed model thus uses synthetic quantitative maps to generate sCT images, improving generalization towards other contrasts. Our code and trained models are publicly available.
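
The two-stage idea, quantitative maps first and sCT regression second, can be sketched with placeholder networks (single convolutions standing in for the real trained models):

```python
import torch
import torch.nn as nn

# Placeholder networks; the models in the paper are deep CNNs.
qmap_model = nn.Conv2d(1, 3, kernel_size=3, padding=1)  # MR -> PD/T1/T2 maps
sct_model = nn.Conv2d(3, 1, kernel_size=3, padding=1)   # maps -> synthetic CT

def mr_to_sct(mr_image: torch.Tensor) -> torch.Tensor:
    """Two-stage inference sketch: contrast-independent quantitative maps
    first, then CT regression from the maps."""
    with torch.no_grad():
        qmaps = qmap_model(mr_image)   # (B, 3, H, W)
        return sct_model(qmaps)

sct = mr_to_sct(torch.randn(1, 1, 64, 64))
```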

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
MRI contrast, Robust machine learning, Synthetic CT generation
National Category
Computer Sciences; Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:umu:diva-214270 (URN); 10.1016/j.zemedi.2023.07.001 (DOI); 37537099 (PubMedID); 2-s2.0-85169824488 (Scopus ID)
Funder
Cancerforskningsfonden i Norrland, LP 18-2182; Cancerforskningsfonden i Norrland, AMP 18-912; Cancerforskningsfonden i Norrland, AMP 20-1014; Cancerforskningsfonden i Norrland, LP 22-2319; Region Västerbotten; Swedish National Infrastructure for Computing (SNIC)
Available from: 2023-09-12. Created: 2023-09-12. Last updated: 2024-01-09.
Vu, M. H., Norman, G., Nyholm, T. & Löfstedt, T. (2022). A Data-Adaptive Loss Function for Incomplete Data and Incremental Learning in Semantic Image Segmentation. IEEE Transactions on Medical Imaging, 41(6), 1320-1330.
2022 (English). In: IEEE Transactions on Medical Imaging, ISSN 0278-0062, E-ISSN 1558-254X, Vol. 41, no. 6, p. 1320-1330. Article in journal (Refereed). Published.
Abstract [en]

In recent years, deep learning has dramatically improved performance in a variety of medical image analysis applications. Among the different types of deep learning models, convolutional neural networks have been among the most successful, and they have been used in many medical imaging applications.

Training deep convolutional neural networks often requires large amounts of image data to generalize well to new, unseen images. Collecting large amounts of data in the medical image domain is often time-consuming and expensive, owing to costly imaging systems and the need for experts to manually create ground-truth annotations. A potential problem arises if new structures are added when a decision support system is already deployed and in use. Since the field of radiation therapy is constantly developing, new structures would also have to be covered by the decision support system.

In the present work, we propose a novel loss function to address multiple problems: imbalanced datasets, partially labeled data, and incremental learning. The proposed loss function adapts to the available data in order to utilize all of it, even when some samples have missing annotations. We demonstrate that the proposed loss function also works well in an incremental learning setting, where an existing model is easily adapted to semi-automatically incorporate delineations of new organs when they appear. Experiments on a large in-house dataset show that the proposed method performs on par with baseline models, while greatly reducing the training time and eliminating the hassle of maintaining multiple models in practice.
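
A minimal sketch of the masking idea behind such a data-adaptive loss, averaging a soft Dice term only over the organs actually annotated in each sample; this is a simplification, not the published loss function:

```python
import torch

def masked_dice_loss(probs, labels, present, eps=1e-6):
    """Soft Dice averaged only over annotated organs. `probs` and `labels`
    are (B, C, ...); `present` is a (B, C) boolean mask marking which organ
    annotations exist for each sample."""
    dims = tuple(range(2, probs.ndim))
    inter = (probs * labels).sum(dim=dims)
    denom = probs.sum(dim=dims) + labels.sum(dim=dims)
    loss = 1.0 - (2 * inter + eps) / (denom + eps)     # (B, C)
    present = present.float()
    return (loss * present).sum() / present.sum().clamp(min=1.0)
```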

Place, publisher, year, edition, pages
IEEE, 2022
Keywords
Adaptation models, Computational modeling, CT, Data models, Image segmentation, Incremental Learning, Medical Imaging, Missing Data, Predictive models, Semantic Image Segmentation, Task analysis, Training
National Category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:umu:diva-191280 (URN); 10.1109/TMI.2021.3139161 (DOI); 000804690300003 (ISI); 34965206 (PubMedID); 2-s2.0-85122295338 (Scopus ID)
Funder
Swedish Research Council, 2018-05973; Cancerforskningsfonden i Norrland, AMP 20-1027; Cancerforskningsfonden i Norrland, LP 18-2182; Region Västerbotten; Vinnova
Available from: 2022-01-13. Created: 2022-01-13. Last updated: 2023-01-24. Bibliographically approved.
Simkó, A., Löfstedt, T., Garpebring, A., Nyholm, T. & Jonsson, J. (2022). MRI bias field correction with an implicitly trained CNN. In: Ender Konukoglu, Bjoern Menze, Archana Venkataraman, Christian Baumgartner, Qi Dou & Shadi Albarqouni (Eds.), Proceedings of the 5th international conference on medical imaging with deep learning. Paper presented at the International Conference on Medical Imaging with Deep Learning, Zurich, Switzerland, July 6-8, 2022 (pp. 1125-1138). ML Research Press.
2022 (English). In: Proceedings of the 5th international conference on medical imaging with deep learning / [ed] Ender Konukoglu; Bjoern Menze; Archana Venkataraman; Christian Baumgartner; Qi Dou; Shadi Albarqouni, ML Research Press, 2022, p. 1125-1138. Conference paper, Published paper (Refereed).
Abstract [en]

In magnetic resonance imaging (MRI), bias fields are difficult to correct since they are inherently unknown. They cause intra-volume intensity inhomogeneities which limit the performance of subsequent automatic medical imaging tasks, e.g., tissue-based segmentation. Since the ground truth is unavailable, training a supervised machine learning solution requires approximating the bias fields, which limits the resulting method. We introduce implicit training, which sidesteps the inherent lack of data and allows the training of machine learning solutions without ground truth. We describe how training a model implicitly for bias field correction allows the use of non-medical data for training, achieving a highly generalized model. The implicit approach was compared to a more traditional training based on medical data. Both models were compared to an optimized N4ITK method, with evaluations on six datasets. The implicitly trained model improved the homogeneity of all encountered medical data, and it generalized better across a range of anatomies than the model trained traditionally. The model achieves a significant speed-up over an optimized N4ITK method, by a factor of 100, and after training it requires no parameters to tune. For tasks such as bias field correction, where ground truth is generally not available but the characteristics of the corruption are known, implicit training promises to be a fruitful alternative for highly generalized solutions.
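
The corruption model that makes implicit training possible can be simulated with smooth multiplicative fields; the low-order polynomial parameterization below is an assumed stand-in for the paper's augmentation:

```python
import numpy as np

def random_bias_field(shape, order=3, strength=0.4, rng=None):
    """Smooth multiplicative bias field from a random low-order 2D polynomial,
    normalized around 1.0. An assumed corruption model, not the paper's
    exact augmentation."""
    rng = np.random.default_rng() if rng is None else rng
    y, x = np.meshgrid(np.linspace(-1, 1, shape[0]),
                       np.linspace(-1, 1, shape[1]), indexing="ij")
    field = np.zeros(shape)
    for i in range(order + 1):
        for j in range(order + 1 - i):
            field += rng.normal() * (x ** i) * (y ** j)
    field = (field - field.min()) / (field.max() - field.min() + 1e-8)
    return 1.0 + strength * (field - 0.5)

# Training pair: corrupted image as input, clean image (or the field) as target.
clean = np.random.rand(128, 128)
corrupted = clean * random_bias_field(clean.shape)
```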

Place, publisher, year, edition, pages
ML Research Press, 2022
Series
Proceedings of Machine Learning Research, ISSN 2640-3498 ; 172
Keywords
Self-supervised learning, Implicit Training, Magnetic Resonance Imaging, Bias Field Correction, Image Restoration
National Category
Radiology, Nuclear Medicine and Medical Imaging; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:umu:diva-205226 (URN); 2-s2.0-85169103625 (Scopus ID)
Conference
International Conference on Medical Imaging with Deep Learning, Zurich, Switzerland, July 6-8, 2022
Available from: 2023-02-27. Created: 2023-02-27. Last updated: 2023-11-10. Bibliographically approved.
Mehta, R., Filos, A., Baid, U., Sako, C., McKinley, R., Rebsamen, M., . . . Arbel, T. (2022). QU-BraTS: MICCAI BraTS 2020 Challenge on Quantifying Uncertainty in Brain Tumor Segmentation - Analysis of Ranking Scores and Benchmarking Results. Journal of Machine Learning for Biomedical Imaging, 1-54, Article ID 026.
2022 (English). In: Journal of Machine Learning for Biomedical Imaging, ISSN 2766-905X, p. 1-54, article id 026. Article in journal (Refereed). Published.
Abstract [en]

Deep learning (DL) models have provided state-of-the-art performance in a wide variety of medical imaging benchmarking challenges, including the Brain Tumor Segmentation (BraTS) challenges. However, the task of focal pathology multi-compartment segmentation (e.g., tumor and lesion sub-regions) is particularly challenging, and potential errors hinder the translation of DL models into clinical workflows. Quantifying the reliability of DL model predictions in the form of uncertainties could enable clinical review of the most uncertain regions, thereby building trust and paving the way towards clinical translation. Recently, a number of uncertainty estimation methods have been introduced for DL medical image segmentation tasks. Developing scores to evaluate and compare the performance of uncertainty measures will assist the end-user in making more informed decisions. In this study, we explore and evaluate a score developed during the BraTS 2019-2020 task on uncertainty quantification (QU-BraTS), designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation. This score (1) rewards uncertainty estimates that produce high confidence in correct assertions and that assign low confidence levels to incorrect assertions, and (2) penalizes uncertainty measures that lead to a higher percentage of under-confident correct assertions. We further benchmark the segmentation uncertainties generated by 14 independent participating teams of QU-BraTS 2020, all of which also participated in the main BraTS segmentation task. Overall, our findings confirm the importance and complementary value that uncertainty estimates provide to segmentation algorithms, and hence highlight the need for uncertainty quantification in medical image analyses. Our evaluation code is made publicly available at https://github.com/RagMeh11/QU-BraTS
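
A simplified sketch of the filtering idea behind such scoring, discarding the most uncertain voxels and tracking the Dice of what remains; the official scoring code is in the repository linked above:

```python
import numpy as np

def filtered_dice(pred, truth, uncertainty, thresholds=(0.25, 0.5, 0.75, 1.0)):
    """Dice of the voxels retained after discarding the most uncertain ones,
    one score per threshold. `pred` and `truth` are boolean arrays;
    `uncertainty` holds per-voxel values in [0, 1]."""
    scores = []
    for t in thresholds:
        keep = uncertainty <= t           # retain only confident voxels
        p, g = pred[keep], truth[keep]
        denom = p.sum() + g.sum()
        scores.append(1.0 if denom == 0 else 2.0 * np.logical_and(p, g).sum() / denom)
    return scores
```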

Keywords
Uncertainty Quantification, Trustworthiness, Segmentation, Brain Tumors, Deep Learning, Neuro-Oncology, Glioma, Glioblastoma
National Category
Computer Vision and Robotics (Autonomous Systems); Other Medical Sciences not elsewhere specified
Identifiers
urn:nbn:se:umu:diva-198857 (URN)
Available from: 2022-08-26. Created: 2022-08-26. Last updated: 2023-04-25. Bibliographically approved.
Zimmermann, L., Buschmann, M., Herrmann, H., Heilemann, G., Kuess, P., Goldner, G., . . . Nesvacil, N. (2021). An MR-only acquisition and artificial intelligence based image-processing protocol for photon and proton therapy using a low field MR. Zeitschrift für Medizinische Physik, 31(1), 78-88.
2021 (English). In: Zeitschrift für Medizinische Physik, ISSN 0939-3889, E-ISSN 1876-4436, Vol. 31, no. 1, p. 78-88. Article in journal (Refereed). Published.
Abstract [en]

Objective: Recent developments in synthetically generated CT (sCT), hybrid MRI linacs and MR-only simulation have underlined the clinical feasibility and acceptance of MR-guided radiation therapy. However, in clinical application, open and low-field MR scanners with a limited field of view can truncate the patient's anatomy, which in turn affects the MR-to-sCT conversion. In this study, an acquisition protocol and subsequent MR image stitching are proposed to overcome the limited field-of-view restriction of open MR scanners for MR-only photon and proton therapy.

Material and Methods: Twelve prostate cancer patients scanned with an open 0.35 T scanner were included. To obtain the full body contour, an enhanced imaging protocol was introduced that included two repeated scans after bilateral table movement. All required structures (patient contour, target and organs at risk) were delineated on a post-processed combined transversal image set (stitched MRI). The post-processed MR was converted into an sCT by a pretrained neural network generator. Inversely planned photon and proton plans (VMAT and SFUD) were designed using the sCT, recalculated on rigidly and deformably registered CT images, and compared based on D2%, D50% and V70 Gy for organs at risk and on D2%, D50% and D98% for the CTV and PTV. The stitched MRI and the untruncated MRI were compared to the CT, and the maximum surface distance was calculated. The sCT was evaluated with respect to delineation accuracy by comparing delineations on the stitched MRI and the sCT using the Dice coefficient for the femoral bones and the whole body.
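
The stitching step described above can be illustrated by placing two acquisitions, related by a known lateral table shift, on a common grid and averaging the overlap; an illustrative sketch only, not the published protocol:

```python
import numpy as np

def stitch_lateral(left_img, right_img, shift_px):
    """Place two acquisitions related by a known lateral table shift of
    `shift_px` pixels on one wide grid, averaging where they overlap."""
    h, w = left_img.shape
    canvas = np.zeros((h, w + shift_px))
    weight = np.zeros_like(canvas)
    canvas[:, :w] += left_img
    weight[:, :w] += 1.0
    canvas[:, shift_px:] += right_img
    weight[:, shift_px:] += 1.0
    return canvas / np.maximum(weight, 1.0)

# Made-up example: two overlapping 40-pixel-wide views of a 48-pixel scene.
img = np.random.rand(32, 48)
wide = stitch_lateral(img[:, :40], img[:, 8:], shift_px=8)
```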

Results: Maximum surface distance analysis revealed uncertainties in the lateral direction of 1–3 mm on average. Dice coefficient analysis confirmed good performance of the sCT conversion: 92%, 93% and 100% were obtained for the left and right femoral bones and the whole body, respectively. Dose comparison resulted in uncertainties below 1% between the deformed CT and the sCT, and below 2% between the rigidly registered CT and the sCT, in the CTV for photon and proton treatment plans.

Discussion: A newly developed acquisition protocol for open MR scanners and subsequent sCT generation showed good acceptance for photon and proton therapy. Moreover, this protocol tackles the restriction of limited FOVs and expands the capacity towards MR-guided proton therapy with horizontal beam lines.

Place, publisher, year, edition, pages
Elsevier, 2021
Keywords
MR-only simulation, Open MR scanner, Stitching protocol
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:umu:diva-186303 (URN); 10.1016/j.zemedi.2020.10.004 (DOI); 000698510300010 (ISI); 2-s2.0-85099385099 (Scopus ID)
Available from: 2021-07-21. Created: 2021-07-21. Last updated: 2023-09-05. Bibliographically approved.