Jonsson, Joakim
Publications (10 of 37)
Sandgren, K., Strandberg, S., Jonsson, J., Grefve, J., Keeratijarut Lindberg, A., Nilsson, E., . . . Riklund, K. (2023). Histopathology-validated lesion detection rates of clinically significant prostate cancer with mpMRI, [68Ga]PSMA-11-PET and [11C]Acetate-PET. Nuclear medicine communications, 44(11), 997-1004
2023 (English). In: Nuclear medicine communications, ISSN 0143-3636, E-ISSN 1473-5628, Vol. 44, no 11, p. 997-1004. Article in journal (Refereed). Published.
Abstract [en]

Objective: PET/CT and multiparametric MRI (mpMRI) are important diagnostic tools in clinically significant prostate cancer (csPC). The aim of this study was to compare csPC detection rates with [68Ga]PSMA-11-PET (PSMA-PET), [11C]Acetate-PET (ACE-PET) and mpMRI, using histopathology as reference, to identify the most suitable imaging modalities for subsequent hybrid imaging. An additional aim was to compare inter-reader variability to assess reproducibility.

Methods: During 2016–2019, all study participants were examined with PSMA-PET/mpMRI and ACE-PET/CT prior to radical prostatectomy. PSMA-PET, ACE-PET and mpMRI were evaluated separately by two observers, and were compared with histopathology-defined csPC. Statistical analyses included two-sided McNemar test and index of specific agreement.

Results: Fifty-five study participants were included, with 130 histopathological intraprostatic lesions >0.05 cc. Of these, 32% (42/130) were classified as csPC with ISUP grade ≥2 and volume >0.5 cc. PSMA-PET and mpMRI showed no difference in performance (P = 0.48), with mean csPC detection rate of 70% (29.5/42) and 74% (31/42), respectively, while with ACE-PET the mean csPC detection rate was 37% (15.5/42). Interobserver agreement was higher with PSMA-PET compared to mpMRI [79% (26/33) vs 67% (24/38)]. Including all detected lesions from each pair of observers, the detection rate increased to 90% (38/42) with mpMRI, and 79% (33/42) with PSMA-PET.

Conclusion: PSMA-PET and mpMRI showed high csPC detection rates and superior performance compared to ACE-PET. The interobserver agreement indicates higher reproducibility with PSMA-PET. The combined result of all observers in both PSMA-PET and mpMRI showed the highest detection rate, suggesting an added value of a hybrid imaging approach.
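
As a rough illustration of the two statistics named in the Methods above (the two-sided McNemar test and the index of specific agreement), the following Python sketch uses made-up detection counts; it is not the study's data or analysis code.

import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Paired per-lesion detections for two modalities; counts are purely illustrative.
table = np.array([
    [25, 5],   # modality A detected: B detected / B missed
    [6, 6],    # modality A missed:   B detected / B missed
])

result = mcnemar(table, exact=True)     # exact two-sided test on the discordant pairs
print(f"McNemar p-value: {result.pvalue:.3f}")

# Index of specific agreement between two observers reading the same modality:
# 2 * (lesions found by both) / (2 * both + only observer 1 + only observer 2).
both, only_obs1, only_obs2 = 26, 4, 3   # illustrative counts
iosa = 2 * both / (2 * both + only_obs1 + only_obs2)
print(f"Index of specific agreement: {iosa:.2f}")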

Place, publisher, year, edition, pages
Lippincott Williams & Wilkins, 2023
Keywords
acetate-PET, detection rate, intraprostatic lesion, multiparametric MRI, prostate cancer, PSMA-PET
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:umu:diva-216125 (URN); 10.1097/MNM.0000000000001743 (DOI); 001083841200009 (); 37615497 (PubMedID); 2-s2.0-85174936230 (Scopus ID)
Funder
Swedish Cancer Society; Västerbotten County Council
Available from: 2023-11-06 Created: 2023-11-06 Last updated: 2023-11-06. Bibliographically approved
Björeland, U., Notstam, K., Fransson, P., Söderkvist, K., Beckman, L., Jonsson, J., . . . Thellenberg-Karlsson, C. (2023). Hyaluronic acid spacer in prostate cancer radiotherapy: dosimetric effects, spacer stability and long-term toxicity and PRO in a phase II study. Radiation Oncology, 18(1), Article ID 1.
2023 (English). In: Radiation Oncology, ISSN 1748-717X, E-ISSN 1748-717X, Vol. 18, no 1, article id 1. Article in journal (Refereed). Published.
Abstract [en]

BACKGROUND: Perirectal spacers may be beneficial to reduce rectal side effects from radiotherapy (RT). Here, we present the impact of a hyaluronic acid (HA) perirectal spacer on rectal dose as well as spacer stability, long-term gastrointestinal (GI) and genitourinary (GU) toxicity and patient-reported outcome (PRO).

METHODS: In this phase II study, 81 patients with low- and intermediate-risk prostate cancer received transrectal injections of HA before external beam RT (78 Gy in 39 fractions). The HA spacer was evaluated with MRI four times: before (MR0) and after HA injection (MR1), and at the middle (MR2) and the end (MR3) of RT. GI and GU toxicity was assessed by a physician for up to five years according to the RTOG scale. PROs were collected using the Swedish National Prostate Cancer Registry and the Prostate Cancer Symptom Scale questionnaires.

RESULTS: There was a significant reduction in rectal V70% (54.6 Gy) and V90% (70.2 Gy) between MR0 and MR1, as well as between MR0 and MR2/MR3. From MR1 to MR2/MR3, HA thickness decreased by 28%/32% and the CTV-rectum space by 19%/17% at the middle level. The cumulative late grade ≥ 2 GI toxicity at 5 years was 5%, and the proportion of patients reporting moderate or severe overall bowel problems at the 5-year follow-up was 12%. The cumulative late grade ≥ 2 GU toxicity at 5 years was 12%, and moderate or severe overall urinary problems at 5 years were reported by 10%.

CONCLUSION: We show that the HA spacer reduced rectal dose and long-term toxicity.
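
For readers unfamiliar with the V70%/V90% notation above (70% and 90% of the 78 Gy prescription, i.e. 54.6 Gy and 70.2 Gy), here is a minimal Python sketch of how such a dose-volume metric can be computed from a voxel-wise dose array and a rectum mask; the arrays are random stand-ins, not study data.

import numpy as np

def v_percent(dose_gy, organ_mask, prescription_gy, level_fraction):
    """Fraction of the organ volume receiving at least level_fraction * prescription_gy."""
    threshold = level_fraction * prescription_gy
    return float(np.mean(dose_gy[organ_mask] >= threshold))

rng = np.random.default_rng(0)
dose = rng.uniform(0.0, 80.0, size=(64, 64, 64))   # hypothetical dose grid in Gy
rectum = np.zeros(dose.shape, dtype=bool)
rectum[20:40, 20:40, 20:40] = True                 # hypothetical rectum contour

print("Rectal V70%:", v_percent(dose, rectum, prescription_gy=78.0, level_fraction=0.70))
print("Rectal V90%:", v_percent(dose, rectum, prescription_gy=78.0, level_fraction=0.90))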

Place, publisher, year, edition, pages
BioMed Central (BMC), 2023
Keywords
Hyaluronic Acid, Prostate cancer, Radiotherapy, Rectal toxicity
National Category
Cancer and Oncology
Identifiers
urn:nbn:se:umu:diva-203799 (URN); 10.1186/s13014-022-02197-x (DOI); 000906713000001 (); 36593460 (PubMedID); 2-s2.0-85145492354 (Scopus ID)
Funder
Region Västernorrland; Cancerforskningsfonden i Norrland; Visare Norr
Available from: 2023-01-20 Created: 2023-01-20 Last updated: 2023-09-05. Bibliographically approved
Simkó, A., Ruiter, S., Löfstedt, T., Garpebring, A., Nyholm, T., Bylund, M. & Jonsson, J. (2023). Improving MR image quality with a multi-task model, using convolutional losses. BMC Medical Imaging, 23(1), Article ID 148.
2023 (English). In: BMC Medical Imaging, ISSN 1471-2342, E-ISSN 1471-2342, Vol. 23, no 1, article id 148. Article in journal (Refereed). Published.
Abstract [en]

PURPOSE: During the acquisition of MRI data, patient-, sequence-, or hardware-related factors can introduce artefacts that degrade image quality. Four of the most significant tasks for improving MR image quality have been bias field correction, super-resolution, motion correction, and noise correction. Machine learning has achieved outstanding results in improving MR image quality for these tasks individually, yet multi-task methods are rarely explored.

METHODS: In this study, we developed a model to simultaneously correct for all four aforementioned artefacts using multi-task learning. Two datasets were collected, one consisting of brain scans and the other of pelvic scans, which were used to train separate models with their corresponding artefact augmentations. Additionally, we explored a novel loss function that aims to reconstruct not only the individual pixel values but also the image gradients, to produce sharper, more realistic results. Differences between the evaluated methods were tested for significance using a Friedman test of equivalence followed by a Nemenyi post-hoc test.

RESULTS: Our proposed model generally outperformed other commonly used correction methods for individual artefacts, consistently achieving equal or superior results in at least one of the evaluation metrics. For images with multiple simultaneous artefacts, we show that the performance of a combination of models trained to correct individual artefacts depends heavily on the order in which they are applied. This is not an issue for our proposed multi-task model. The model trained with our novel convolutional loss function always outperformed the model trained with a mean squared error loss when evaluated using Visual Information Fidelity, a quality metric connected to perceptual quality.

CONCLUSION: We trained two models for multi-task MRI artefact correction of brain and pelvic scans. We used a novel loss function that significantly improves the image quality of the outputs compared with a mean squared error loss. The approach performs well on real-world data, and it provides insight into which artefacts it detects and corrects. Our proposed model and source code were made publicly available.
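
The loss described above reconstructs image gradients as well as pixel values. The PyTorch sketch below shows that general idea with a simple finite-difference gradient term added to an MSE loss; it illustrates the concept only and is not the authors' exact convolutional loss or code.

import torch
import torch.nn.functional as F

def gradient_loss(pred, target):
    """MSE between finite-difference image gradients of tensors shaped (B, C, H, W)."""
    dx_pred, dx_target = pred[..., :, 1:] - pred[..., :, :-1], target[..., :, 1:] - target[..., :, :-1]
    dy_pred, dy_target = pred[..., 1:, :] - pred[..., :-1, :], target[..., 1:, :] - target[..., :-1, :]
    return F.mse_loss(dx_pred, dx_target) + F.mse_loss(dy_pred, dy_target)

def combined_loss(pred, target, alpha=0.5):
    """Pixel-wise MSE plus a weighted gradient term; alpha is an arbitrary example weight."""
    return F.mse_loss(pred, target) + alpha * gradient_loss(pred, target)

pred = torch.rand(2, 1, 64, 64)      # dummy network output
target = torch.rand(2, 1, 64, 64)    # dummy artefact-free reference
print(combined_loss(pred, target).item())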

Place, publisher, year, edition, pages
BioMed Central (BMC), 2023
Keywords
Image artefact correction, Machine learning, Magnetic resonance imaging
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:umu:diva-215277 (URN); 10.1186/s12880-023-01109-z (DOI); 37784039 (PubMedID); 2-s2.0-85173046817 (Scopus ID)
Funder
Cancerforskningsfonden i Norrland, LP 18-2182; Cancerforskningsfonden i Norrland, AMP 18-912; Cancerforskningsfonden i Norrland, AMP 20-1014; Cancerforskningsfonden i Norrland, LP 22-2319; Region Västerbotten
Available from: 2023-10-17 Created: 2023-10-17 Last updated: 2023-10-25. Bibliographically approved
Kaushik, S. S., Bylund, M., Cozzini, C., Shanbhag, D., Petit, S. F., Wyatt, J. J., . . . Menze, B. (2023). Region of interest focused MRI to synthetic CT translation using regression and segmentation multi-task network. Physics in Medicine and Biology, 68(19), Article ID 195003.
2023 (English). In: Physics in Medicine and Biology, ISSN 0031-9155, E-ISSN 1361-6560, Vol. 68, no 19, article id 195003. Article in journal (Refereed). Published.
Abstract [en]

Objective: In an MR-only clinical workflow, replacing CT with MR images improves workflow efficiency and reduces radiation exposure for the patient. An important step towards eliminating the CT scan from the workflow is to generate the information provided by CT from an MR image. In this work, we aim to demonstrate a method that generates accurate synthetic CT (sCT) from an MR image to suit the radiation therapy (RT) treatment planning workflow. We show the feasibility of the method and make way for a broader clinical evaluation.

Approach: We present a machine learning method for sCT generation from zero-echo-time (ZTE) MRI aimed at structural and quantitative accuracy of the image, with a particular focus on accurate bone density prediction. Misestimation of bone density in the radiation path could lead to unintended dose delivery to the target volume and result in a suboptimal treatment outcome. We propose a loss function that favors the spatially sparse bone region in the image. We harness the ability of a multi-task network to produce correlated outputs as a framework to enable localization of the region of interest (RoI) via segmentation, emphasize regression of values within the RoI, and still retain the overall accuracy via global regression. The network is optimized by a composite loss function that combines a dedicated loss from each task.

Main results: We have included 54 brain patient images in this study and tested the sCT images against reference CT on a subset of 20 cases. A pilot dose evaluation was performed on 9 of the 20 test cases to demonstrate the viability of the generated sCT in RT planning. The average quantitative metrics produced by the proposed method over the test set were: (a) mean absolute error (MAE) of 70 ± 8.6 HU; (b) peak signal-to-noise ratio (PSNR) of 29.4 ± 2.8 dB; (c) structural similarity metric (SSIM) of 0.95 ± 0.02; and (d) Dice coefficient of the body region of 0.984 ± 0.

Significance: We demonstrate that the proposed method generates sCT images that resemble the visual characteristics of a real CT image and have a quantitative accuracy that suits the RT dose planning application. We compare the dose calculation from the proposed sCT and the real CT in a radiation therapy treatment planning setup and show that sCT-based planning falls within a 0.5% target dose error. The method presented here, with an initial dose evaluation, is an encouraging precursor to a broader clinical evaluation of sCT-based RT planning on different anatomical regions.
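
A minimal sketch of how the evaluation metrics listed above (MAE in HU, PSNR, SSIM, and a body-region Dice coefficient) can be computed with NumPy and scikit-image; the images and the HU threshold for the body mask are illustrative stand-ins, not the study's data or evaluation code.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(1)
ct = rng.uniform(-1000.0, 1500.0, size=(128, 128)).astype(np.float32)    # reference CT in HU (stand-in)
sct = (ct + rng.normal(0.0, 70.0, size=ct.shape)).astype(np.float32)     # hypothetical synthetic CT

body_ct = ct > -400.0          # crude HU-threshold body masks, for illustration only
body_sct = sct > -400.0

mae = np.mean(np.abs(sct[body_ct] - ct[body_ct]))
psnr = peak_signal_noise_ratio(ct, sct, data_range=2500.0)
ssim = structural_similarity(ct, sct, data_range=2500.0)
dice = 2.0 * np.logical_and(body_ct, body_sct).sum() / (body_ct.sum() + body_sct.sum())

print(f"MAE {mae:.1f} HU, PSNR {psnr:.1f} dB, SSIM {ssim:.3f}, Dice {dice:.3f}")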

Place, publisher, year, edition, pages
Institute of Physics (IOP), 2023
Keywords
focused loss, image translation, MRI radiation therapy, multi-task CNN, synthetic CT
National Category
Radiology, Nuclear Medicine and Medical Imaging; Medical Image Processing
Identifiers
urn:nbn:se:umu:diva-214754 (URN); 10.1088/1361-6560/acefa3 (DOI); 37567235 (PubMedID); 2-s2.0-85171601230 (Scopus ID)
Funder
EU, Horizon 2020
Available from: 2023-10-13 Created: 2023-10-13 Last updated: 2023-10-13. Bibliographically approved
Simkó, A., Garpebring, A., Jonsson, J., Nyholm, T. & Löfstedt, T. (2023). Reproducibility of the methods in medical imaging with deep learning. In: : . Paper presented at Medical Imaging with Deep Learning 2023, MIDL, Nashville, July 10-12, 2023.
2023 (English). Conference paper, Published paper (Refereed).
Abstract [en]

Concerns about the reproducibility of deep learning research are more prominent than ever, with no clear solution in sight. The Medical Imaging with Deep Learning (MIDL) conference has made advancements in employing empirical rigor with regard to reproducibility by advocating open access and, more recently, recommending that authors make their code public, both aspects being adopted by the majority of the conference submissions. We evaluated all accepted full paper submissions to MIDL between 2018 and 2022 using established, but adjusted, guidelines addressing the reproducibility and quality of the public repositories. The evaluations show that publishing repositories and using public datasets are becoming more popular, which helps traceability, but the quality of the repositories shows room for improvement in every aspect. Merely 22% of all submissions contain a repository that was deemed repeatable using our evaluations. From the issues commonly encountered during the evaluations, we propose a set of guidelines for machine learning-related research for medical imaging applications, adjusted specifically for future submissions to MIDL. We presented our results to future MIDL authors, who were eager to continue an open discussion on the topic of code reproducibility.

Keywords
Reproducibility, Reproducibility of the Methods, Deep Learning, Medical Imaging, Open Science, Transparent Research
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-215692 (URN)
Conference
Medical Imaging with Deep Learning 2023, MIDL, Nashville, July 10-12, 2023
Note

Originally included in thesis in manuscript form. 

Available from: 2023-10-25 Created: 2023-10-25 Last updated: 2023-10-26
Simkó, A., Bylund, M., Jönsson, G., Löfstedt, T., Garpebring, A., Nyholm, T. & Jonsson, J. (2023). Towards MR contrast independent synthetic CT generation. Zeitschrift für Medizinische Physik
2023 (English). In: Zeitschrift für Medizinische Physik, ISSN 0939-3889, E-ISSN 1876-4436. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

The use of synthetic CT (sCT) in the radiotherapy workflow would reduce costs and scan time while removing the uncertainties of working with both MR and CT modalities. The performance of deep learning (DL) solutions for sCT generation is steadily increasing; however, most proposed methods were trained and validated on private datasets of a single contrast from a single scanner. Such solutions might not perform equally well on other datasets, limiting their general usability and therefore their value. Additionally, functional evaluations of sCTs, such as dosimetric comparisons with CT-based dose calculations, better show the impact of the methods, but these evaluations are more labor-intensive than pixel-wise metrics.

To improve the generalization of an sCT model, we propose to incorporate a pre-trained DL model that pre-processes the input MR images by generating artificial proton density, T1 and T2 maps (i.e. contrast-independent quantitative maps), which are then used for sCT generation. Using a dataset of only T2w MR images, we compare the robustness of this approach towards input MR contrast with that of a model trained on the MR images directly. We evaluate the generated sCTs using pixel-wise metrics and by calculating mean radiological depths as an approximation of the mean delivered dose. On T2w images acquired with the same settings as the training dataset, there was no significant difference between the performance of the models. However, when evaluated on T1w images, and on a wide range of other contrasts and scanners from both public and private datasets, our approach outperforms the baseline model. In summary, trained on a dataset of T2w MR images, our proposed model uses synthetic quantitative maps to generate sCT images, improving the generalization towards other contrasts. Our code and trained models are publicly available.
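
A conceptual PyTorch sketch of the two-stage idea described above: a pre-trained network turns any input contrast into synthetic PD/T1/T2 maps, and a second network maps those maps to a synthetic CT. The tiny networks, the frozen first stage and all shapes here are stand-in assumptions for illustration; the actual models are the ones released with the paper.

import torch
import torch.nn as nn

class StandInNet(nn.Module):
    """Placeholder for a real image-to-image network such as a U-Net."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, out_channels, kernel_size=3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

contrast_to_qmaps = StandInNet(in_channels=1, out_channels=3)   # -> synthetic PD, T1, T2 maps
qmaps_to_sct = StandInNet(in_channels=3, out_channels=1)        # quantitative maps -> synthetic CT

contrast_to_qmaps.eval()
for p in contrast_to_qmaps.parameters():
    p.requires_grad_(False)                      # assume the pre-trained first stage is kept fixed

mr_any_contrast = torch.rand(1, 1, 128, 128)     # dummy MR slice of an arbitrary contrast
with torch.no_grad():
    qmaps = contrast_to_qmaps(mr_any_contrast)
sct = qmaps_to_sct(qmaps)
print(sct.shape)                                 # torch.Size([1, 1, 128, 128])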

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
MRI contrast, Robust machine learning, Synthetic CT generation
National Category
Computer Sciences; Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:umu:diva-214270 (URN); 10.1016/j.zemedi.2023.07.001 (DOI); 37537099 (PubMedID); 2-s2.0-85169824488 (Scopus ID)
Funder
Cancerforskningsfonden i Norrland, LP 18-2182; Cancerforskningsfonden i Norrland, AMP 18-912; Cancerforskningsfonden i Norrland, AMP 20-1014; Cancerforskningsfonden i Norrland, LP 22-2319; Region Västerbotten; Swedish National Infrastructure for Computing (SNIC)
Available from: 2023-09-12 Created: 2023-09-12 Last updated: 2024-01-09
Simkó, A., Löfstedt, T., Garpebring, A., Nyholm, T. & Jonsson, J. (2022). MRI bias field correction with an implicitly trained CNN. In: Ender Konukoglu; Bjoern Menze; Archana Venkataraman; Christian Baumgartner; Qi Dou; Shadi Albarqouni (Ed.), Proceedings of the 5th international conference on medical imaging with deep learning: . Paper presented at International Conference on Medical Imaging with Deep Learning, Zurich, Switzerland, July 6-8, 2022 (pp. 1125-1138). ML Research Press
2022 (English). In: Proceedings of the 5th international conference on medical imaging with deep learning / [ed] Ender Konukoglu; Bjoern Menze; Archana Venkataraman; Christian Baumgartner; Qi Dou; Shadi Albarqouni, ML Research Press, 2022, p. 1125-1138. Conference paper, Published paper (Refereed).
Abstract [en]

In magnetic resonance imaging (MRI), bias fields are difficult to correct since they are inherently unknown. They cause intra-volume intensity inhomogeneities which limit the performance of subsequent automatic medical imaging tasks, e.g. tissue-based segmentation. Since the ground truth is unavailable, training a supervised machine learning solution requires approximating the bias fields, which limits the resulting method. We introduce implicit training, which sidesteps the inherent lack of data and allows the training of machine learning solutions without ground truth. We describe how training a model implicitly for bias field correction allows using non-medical data for training, achieving a highly generalized model. The implicit approach was compared to a more traditional training based on medical data. Both models were compared to an optimized N4ITK method, with evaluations on six datasets. The implicitly trained model improved the homogeneity of all encountered medical data, and it generalized better across a range of anatomies than the model trained traditionally. The model achieves a significant speed-up over the optimized N4ITK method, by a factor of 100, and after training it requires no parameters to tune. For tasks such as bias field correction, where ground truth is generally not available but the characteristics of the corruption are known, implicit training promises to be a fruitful alternative for highly generalized solutions.
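
One way to read the implicit-training idea above is that the corruption, rather than the ground truth, is simulated: arbitrary (even non-medical) images are multiplied by a synthetic smooth bias field, and the model is supervised through that known corruption. The PyTorch sketch below illustrates this reading with heavily simplified stand-ins; it is not the authors' training scheme or code.

import torch
import torch.nn as nn
import torch.nn.functional as F

def random_bias_field(height, width, strength=0.4):
    """Smooth multiplicative field: low-resolution noise upsampled and centred around 1."""
    low_res = torch.rand(1, 1, 4, 4)
    field = F.interpolate(low_res, size=(height, width), mode="bilinear", align_corners=False)
    return 1.0 + strength * (field - 0.5)

model = nn.Sequential(                      # stand-in for the correction network
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(3):                       # a few illustrative steps
    clean = torch.rand(8, 1, 64, 64)        # any images; no real bias-field ground truth needed
    corrupted = clean * random_bias_field(64, 64)
    loss = F.mse_loss(model(corrupted), clean)   # supervision comes from the simulated corruption
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(step, loss.item())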

Place, publisher, year, edition, pages
ML Research Press, 2022
Series
Proceedings of Machine Learning Research, ISSN 2640-3498 ; 172
Keywords
Self-supervised learning, Implicit Training, Magnetic Resonance Imaging, Bias Field Correction, Image Restoration
National Category
Radiology, Nuclear Medicine and Medical Imaging; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:umu:diva-205226 (URN); 2-s2.0-85169103625 (Scopus ID)
Conference
International Conference on Medical Imaging with Deep Learning, Zurich, Switzerland, July 6-8, 2022
Available from: 2023-02-27 Created: 2023-02-27 Last updated: 2023-11-10. Bibliographically approved
Simkó, A., Löfstedt, T., Garpebring, A., Bylund, M., Nyholm, T. & Jonsson, J. (2021). Changing the Contrast of Magnetic Resonance Imaging Signals using Deep Learning. In: Mattias Heinrich; Qi Dou; Marleen de Bruijne; Jan Lellmann; Alexander Schläfer; Floris Ernst (Ed.), Proceedings of the Fourth Conference on Medical Imaging with Deep Learning, PMLR: . Paper presented at Medical Imaging with Deep Learning (MIDL), Online, 7-9 July, 2021. (pp. 713-727). Lübeck University; Hamburg University of Technology, 143
2021 (English). In: Proceedings of the Fourth Conference on Medical Imaging with Deep Learning, PMLR / [ed] Mattias Heinrich; Qi Dou; Marleen de Bruijne; Jan Lellmann; Alexander Schläfer; Floris Ernst, Lübeck University; Hamburg University of Technology, 2021, Vol. 143, p. 713-727. Conference paper, Published paper (Refereed).
Abstract [en]

The contrast settings to select before acquiring a magnetic resonance imaging (MRI) signal depend heavily on the subsequent tasks. As each contrast highlights different tissues, automated segmentation tools, for example, might be optimized for a certain contrast, while for radiotherapy, multiple scans of the same region with different contrasts can achieve better accuracy for delineating tumours and organs at risk. Unfortunately, the optimal contrast for the subsequent automated methods might not be known at the time of signal acquisition, and performing multiple scans with different contrasts increases the total examination time, while registering the sequences introduces extra work and potential errors. Building on the recent achievements of deep learning in medical applications, the presented work describes a novel approach for transferring any contrast to any other. The novel model architecture incorporates the signal equation for spin echo sequences, and hence the model inherently learns the unknown quantitative maps for proton density, T1 and T2 relaxation times (PD, T1 and T2, respectively). This grants the model the ability to retrospectively reconstruct spin echo sequences by changing the contrast settings echo time and repetition time (TE and TR, respectively). The model learns to identify the contrast of pelvic MR images, so no paired data of the same anatomy from different contrasts is required for training. This means that the experiments are easily reproducible with other contrasts or other patient anatomies. Regardless of the contrast of the input image, the model achieves accurate results for reconstructing signal with the contrasts available for evaluation. For the same anatomy, the quantitative maps are consistent for a range of input image contrasts. Realized in practice, the proposed method would greatly simplify the modern radiotherapy pipeline. The trained model is made public together with a tool for testing the model on example images.
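
The spin echo signal equation referred to above is a standard model, S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2). The short NumPy sketch below uses it to show how quantitative PD, T1 and T2 maps allow any TE/TR contrast to be synthesised retrospectively; the maps and contrast settings are arbitrary stand-ins.

import numpy as np

def spin_echo_signal(pd, t1_ms, t2_ms, te_ms, tr_ms):
    """Standard spin echo signal model: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2)."""
    return pd * (1.0 - np.exp(-tr_ms / t1_ms)) * np.exp(-te_ms / t2_ms)

rng = np.random.default_rng(2)
pd = rng.uniform(0.2, 1.0, size=(128, 128))        # stand-in quantitative maps
t1 = rng.uniform(300.0, 2000.0, size=(128, 128))   # ms
t2 = rng.uniform(30.0, 200.0, size=(128, 128))     # ms

t1_weighted = spin_echo_signal(pd, t1, t2, te_ms=15.0, tr_ms=500.0)    # short TE, short TR
t2_weighted = spin_echo_signal(pd, t1, t2, te_ms=100.0, tr_ms=4000.0)  # long TE, long TR
print(t1_weighted.mean(), t2_weighted.mean())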

Place, publisher, year, edition, pages
Lübeck University; Hamburg University of Technology, 2021
Series
Proceedings of Machine Learning Research, ISSN 2640-3498
National Category
Radiology, Nuclear Medicine and Medical Imaging; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:umu:diva-190497 (URN); 2-s2.0-85162848187 (Scopus ID)
Conference
Medical Imaging with Deep Learning (MIDL), Online, 7-9 July, 2021.
Available from: 2021-12-16 Created: 2021-12-16 Last updated: 2023-10-25. Bibliographically approved
Björeland, U., Nyholm, T., Jonsson, J., Skorpil, M., Blomqvist, L., Strandberg, S., . . . Thellenberg-Karlsson, C. (2021). Impact of neoadjuvant androgen deprivation therapy on magnetic resonance imaging features in prostate cancer before radiotherapy. Physics and Imaging in Radiation Oncology, 17, 117-123
2021 (English). In: Physics and Imaging in Radiation Oncology, E-ISSN 2405-6316, Vol. 17, p. 117-123. Article in journal (Refereed). Published.
Abstract [en]

Background and purpose: In locally advanced prostate cancer (PC), androgen deprivation therapy (ADT) in combination with whole-prostate radiotherapy (RT) is the standard treatment. On multiparametric magnetic resonance imaging (MRI), ADT affects the prostate as well as the tumour, with decreased PC conspicuity and impaired localisation of the prostate lesion. Image texture analysis has been suggested to be of aid in separating tumour from normal tissue. The aim of the study was to investigate the impact of ADT on MRI features defined at baseline in prostate cancer, with the goal of assessing whether they might be of use in radiotherapy planning.

Materials and methods: Fifty PC patients were included. Multiparametric MRI was performed before and three months after ADT. At baseline, a tumour volume with suspected tumour content was delineated on apparent diffusion coefficient (ADC) maps, together with a reference volume in normal prostatic tissue. These volumes were transferred to the MRIs after ADT and were analysed with first-order and invariant Haralick features.

Results: At baseline, the median value and several of the invariant Haralick features of ADC showed a significant difference between tumour and reference volumes. After ADT, only the ADC median value could significantly differentiate the two volumes.

Conclusions: Invariant Haralick features could not distinguish between baseline MRI-defined PC and normal tissue after ADT. The first-order median value remained significantly different between tumour and reference volumes after ADT, but the difference was less pronounced than before ADT.
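
For orientation, the sketch below computes standard grey-level co-occurrence matrix (GLCM, i.e. Haralick-type) texture features on a quantised patch with scikit-image. The ADC values are random stand-ins, and the invariant formulation of the Haralick features used in the study is not reproduced here.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(3)
adc_patch = rng.uniform(0.4e-3, 2.0e-3, size=(32, 32))   # hypothetical ADC values (mm^2/s)

# Quantise to a fixed number of grey levels before building the co-occurrence matrix.
levels = 32
edges = np.linspace(adc_patch.min(), adc_patch.max(), levels)
quantised = (np.digitize(adc_patch, edges) - 1).astype(np.uint8)

glcm = graycomatrix(quantised, distances=[1], angles=[0, np.pi / 2],
                    levels=levels, symmetric=True, normed=True)
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop).mean())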

Place, publisher, year, edition, pages
Elsevier, 2021
Keywords
Androgen deprivation, GLCM, mpMRI, Prostate, Texture
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:umu:diva-181012 (URN); 10.1016/j.phro.2021.01.004 (DOI); 000645143900021 (); 2-s2.0-85101352503 (Scopus ID)
Available from: 2021-03-05 Created: 2021-03-05 Last updated: 2023-09-05. Bibliographically approved
Sandgren, K., Nilsson, E., Keeratijarut Lindberg, A., Strandberg, S., Blomqvist, L., Bergh, A., . . . Nyholm, T. (2021). Registration of histopathology to magnetic resonance imaging of prostate cancer. Physics and Imaging in Radiation Oncology, 18, 19-25
2021 (English). In: Physics and Imaging in Radiation Oncology, E-ISSN 2405-6316, Vol. 18, p. 19-25. Article in journal (Refereed). Published.
Abstract [en]

Background and purpose: The diagnostic accuracy of new imaging techniques requires validation, preferably by histopathological verification. The aim of this study was to develop and present a registration procedure between histopathology and in-vivo magnetic resonance imaging (MRI) of the prostate, to estimate its uncertainty and to evaluate the benefit of adding a contour-correcting registration.

Materials and methods: For twenty-five prostate cancer patients planned for radical prostatectomy, a 3D-printed prostate mold based on in-vivo MRI was created, and an ex-vivo MRI of the specimen, placed inside the mold, was performed. Each histopathology slice was registered to its corresponding ex-vivo MRI slice using a 2D affine registration. The ex-vivo MRI was rigidly registered to the in-vivo MRI, and the resulting transform was applied to the histopathology stack. A 2D deformable registration was used to correct for specimen distortion related to the specimen's fit inside the mold. We estimated the spatial uncertainty by comparing positions of landmarks in the in-vivo MRI and the corresponding registered histopathology stack.

Results: Eighty-four landmarks were identified, located in the urethra (62%), prostatic cysts (33%), and the ejaculatory ducts (5%). The median number of landmarks was 3 per patient. We showed a median in-plane error of 1.8 mm before and 1.7 mm after the contour-correcting deformable registration. In patients with extraprostatic margins, the median in-plane error improved from 2.1 mm to 1.8 mm after the contour-correcting deformable registration.

Conclusions: Our registration procedure accurately registers histopathology to in-vivo MRI, with low uncertainty. The contour-correcting registration was beneficial in patients with extraprostatic surgical margins.
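
As a schematic of how a registration chain like the one above can be assembled in software, the SimpleITK sketch below stacks a per-slice 2D affine transform (represented here embedded in 3D) and a rigid ex-vivo to in-vivo transform into one composite transform and resamples a histology volume into in-vivo space. The identity transforms and empty images are placeholders; this is not the study's implementation.

import SimpleITK as sitk

# Placeholders for the transforms obtained by the registrations described above.
slice_affine_in_3d = sitk.AffineTransform(3)
rigid_exvivo_to_invivo = sitk.Euler3DTransform()

# Stack the two stages into a single composite transform.
composite = sitk.CompositeTransform(3)
composite.AddTransform(rigid_exvivo_to_invivo)
composite.AddTransform(slice_affine_in_3d)

# Empty images standing in for the stacked histopathology and the in-vivo MRI.
histology_stack = sitk.Image(128, 128, 32, sitk.sitkFloat32)
invivo_mri = sitk.Image(128, 128, 64, sitk.sitkFloat32)

registered = sitk.Resample(histology_stack, invivo_mri, composite,
                           sitk.sitkLinear, 0.0, sitk.sitkFloat32)
print(registered.GetSize())    # matches the in-vivo MRI grid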

Place, publisher, year, edition, pages
Elsevier, 2021
Keywords
Histopathology correlation, Image registration, PET/MRI, Prostate cancer
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:umu:diva-182584 (URN); 10.1016/j.phro.2021.03.004 (DOI); 000662270600004 (); 2-s2.0-85104070374 (Scopus ID)
Available from: 2021-04-29 Created: 2021-04-29 Last updated: 2023-09-05. Bibliographically approved