Publications (10 of 48)
Hellström, M., Löfstedt, T. & Garpebring, A. (2023). Denoising and uncertainty estimation in parameter mapping with approximate Bayesian deep image priors. Magnetic Resonance in Medicine, 90(6), 2557-2571
2023 (English). In: Magnetic Resonance in Medicine, ISSN 0740-3194, E-ISSN 1522-2594, Vol. 90, no 6, p. 2557-2571. Article in journal (Refereed). Published.
Abstract [en]

Purpose: To mitigate the problem of noisy parameter maps with high uncertainties by casting parameter mapping as a denoising task based on Deep Image Priors.

Methods: We extend the concept of denoising with Deep Image Prior (DIP) into parameter mapping by treating the output of an image-generating network as a parametrization of tissue parameter maps. The method implicitly denoises the parameter mapping process by filtering low-level image features with an untrained convolutional neural network (CNN). Our implementation includes uncertainty estimation from Bernoulli approximate variational inference, implemented with MC dropout, which provides model uncertainty in each voxel of the denoised parameter maps. The method is modular, so the specifics of different applications (e.g., T1 mapping) separate into application-specific signal equation blocks. We evaluate the method on variable flip angle T1 mapping, multi-echo T2 mapping, and apparent diffusion coefficient mapping.

Results: We found that deep image prior adapts successfully to several applications in parameter mapping. In all evaluations, the method produces noise-reduced parameter maps with decreased uncertainty compared to conventional methods. The downsides of the proposed method are the long computational time and the introduction of some bias from the denoising prior.

Conclusion: DIP successfully denoises the parameter mapping process and applies to several applications with limited hyperparameter tuning. Further, it is easy to implement since DIP methods do not use network training data. Although time-consuming to compute, the uncertainty information from MC dropout makes the method more robust and provides useful information when properly calibrated.
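
As a concrete illustration of the idea, here is a minimal sketch of DIP-style parameter mapping for variable flip angle T1 mapping, assuming PyTorch, a hypothetical untrained CNN `net` with a fixed random input `z`, and the standard SPGR/VFA signal equation; it is not the authors' implementation, and MC-dropout uncertainty would additionally require repeated forward passes with dropout kept active.

```python
# Minimal sketch, not the authors' code: an untrained CNN parametrizes (M0, T1) maps,
# and a signal equation block turns them into images that are matched to the noisy data.
import torch

def vfa_signal(M0, T1, flip_angles, TR):
    """SPGR/VFA signal equation: S = M0*sin(a)*(1 - E1)/(1 - E1*cos(a)), E1 = exp(-TR/T1)."""
    E1 = torch.exp(-TR / T1)
    a = flip_angles.view(-1, 1, 1)                 # one image per flip angle (radians)
    return M0 * torch.sin(a) * (1 - E1) / (1 - E1 * torch.cos(a))

def dip_parameter_mapping(noisy_images, flip_angles, TR, net, z, n_iter=2000, lr=1e-3):
    """Fit the untrained network so that its output, pushed through the signal equation,
    reproduces the measured images; the CNN acts as an implicit denoising prior."""
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(n_iter):
        optimizer.zero_grad()
        out = net(z)                                # shape (1, 2, H, W)
        M0 = torch.nn.functional.softplus(out[:, 0])
        T1 = torch.nn.functional.softplus(out[:, 1]) + 1e-3   # keep T1 positive
        loss = torch.mean((vfa_signal(M0, T1, flip_angles, TR) - noisy_images) ** 2)
        loss.backward()
        optimizer.step()
    return M0.detach(), T1.detach()                 # denoised parameter maps
```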

Place, publisher, year, edition, pages
John Wiley & Sons, 2023
Keywords
deep image prior, denoising, parameter mapping, quantitative MRI, uncertainty estimation
National Category
Medical Image Processing; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:umu:diva-213711 (URN); 10.1002/mrm.29823 (DOI); 001049833500001 (); 37582257 (PubMedID); 2-s2.0-85168117341 (Scopus ID)
Funder
Swedish Research Council, 2019-0432; Region Västerbotten, RV-970119; Cancerforskningsfonden i Norrland, AMP 18-912
Available from: 2023-09-14 Created: 2023-09-14 Last updated: 2023-12-20. Bibliographically approved.
Simkó, A., Ruiter, S., Löfstedt, T., Garpebring, A., Nyholm, T., Bylund, M. & Jonsson, J. (2023). Improving MR image quality with a multi-task model, using convolutional losses. BMC Medical Imaging, 23(1), Article ID 148.
2023 (English). In: BMC Medical Imaging, ISSN 1471-2342, E-ISSN 1471-2342, Vol. 23, no 1, article id 148. Article in journal (Refereed). Published.
Abstract [en]

PURPOSE: During the acquisition of MRI data, patient-, sequence-, or hardware-related factors can introduce artefacts that degrade image quality. Four of the most significant tasks for improving MR image quality have been bias field correction, super-resolution, motion correction, and noise correction. Machine learning has achieved outstanding results in improving MR image quality for these tasks individually, yet multi-task methods are rarely explored.

METHODS: In this study, we developed a model to simultaneously correct for all four of the aforementioned artefacts using multi-task learning. Two different datasets were collected, one consisting of brain scans and the other of pelvic scans, which were used to train separate models with their corresponding artefact augmentations. Additionally, we explored a novel loss function that aims to reconstruct not only the individual pixel values but also the image gradients, to produce sharper, more realistic results. The differences between the evaluated methods were tested for significance using a Friedman test of equivalence followed by a Nemenyi post-hoc test.

RESULTS: Our proposed model generally outperformed other commonly used correction methods for individual artefacts, consistently achieving equal or superior results in at least one of the evaluation metrics. For images with multiple simultaneous artefacts, we show that the performance of a combination of models, each trained to correct an individual artefact, depends heavily on the order in which they are applied; this is not an issue for our proposed multi-task model. The model trained using our novel convolutional loss function always outperformed the model trained with a mean squared error loss when evaluated using Visual Information Fidelity, a metric connected to perceptual quality.

CONCLUSION: We trained two models for multi-task MRI artefact correction of brain and pelvic scans. We used a novel loss function that significantly improves the image quality of the outputs compared to using a mean squared error loss. The approach performs well on real-world data, and it provides insight into which artefacts it detects and corrects for. Our proposed model and source code were made publicly available.
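
To illustrate the gradient-matching idea behind such a loss, here is a minimal sketch assuming PyTorch; the exact convolutional loss used in the paper is defined in the authors' public source code and may differ in detail.

```python
# Illustrative sketch only: combine a pixel-wise loss with a loss on finite-difference
# image gradients, which encourages sharper, more realistic reconstructions.
import torch.nn.functional as F

def gradient_loss(pred, target):
    """L1 distance between the vertical and horizontal gradients of two image batches."""
    dyp, dxp = pred[..., 1:, :] - pred[..., :-1, :], pred[..., :, 1:] - pred[..., :, :-1]
    dyt, dxt = target[..., 1:, :] - target[..., :-1, :], target[..., :, 1:] - target[..., :, :-1]
    return F.l1_loss(dyp, dyt) + F.l1_loss(dxp, dxt)

def combined_loss(pred, target, w_grad=1.0):
    """Pixel-value reconstruction plus gradient reconstruction."""
    return F.mse_loss(pred, target) + w_grad * gradient_loss(pred, target)
```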

Place, publisher, year, edition, pages
BioMed Central (BMC), 2023
Keywords
Image artefact correction, Machine learning, Magnetic resonance imaging
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:umu:diva-215277 (URN); 10.1186/s12880-023-01109-z (DOI); 37784039 (PubMedID); 2-s2.0-85173046817 (Scopus ID)
Funder
Cancerforskningsfonden i Norrland, LP 18-2182; Cancerforskningsfonden i Norrland, AMP 18-912; Cancerforskningsfonden i Norrland, AMP 20-1014; Cancerforskningsfonden i Norrland, LP 22-2319; Region Västerbotten
Available from: 2023-10-17 Created: 2023-10-17 Last updated: 2023-10-25. Bibliographically approved.
Simkó, A., Garpebring, A., Jonsson, J., Nyholm, T. & Löfstedt, T. (2023). Reproducibility of the methods in medical imaging with deep learning. Paper presented at Medical Imaging with Deep Learning 2023, MIDL, Nashville, July 10-12, 2023.
2023 (English). Conference paper, Published paper (Refereed).
Abstract [en]

Concerns about the reproducibility of deep learning research are more prominent than ever, with no clear solution in sight. The Medical Imaging with Deep Learning (MIDL) conference has made advancements in employing empirical rigor with regard to reproducibility by advocating open access and, more recently, by recommending that authors make their code public; both aspects have been adopted by the majority of conference submissions. We have evaluated all accepted full-paper submissions to MIDL between 2018 and 2022 using established but adjusted guidelines addressing the reproducibility and quality of the public repositories. The evaluations show that publishing repositories and using public datasets are becoming more popular, which helps traceability, but the quality of the repositories leaves room for improvement in every aspect. Merely 22% of all submissions contain a repository that was deemed repeatable using our evaluations. From the issues commonly encountered during the evaluations, we propose a set of guidelines for machine learning-related research in medical imaging applications, adjusted specifically for future submissions to MIDL. We presented our results to future MIDL authors, who were eager to continue an open discussion on the topic of code reproducibility.

Keywords
Reproducibility, Reproducibility of the Methods, Deep Learning, Medical Imaging, Open Science, Transparent Research
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-215692 (URN)
Conference
Medical Imaging with Deep Learning 2023, MIDL, Nashville, July 10-12, 2023
Note

Originally included in thesis in manuscript form. 

Available from: 2023-10-25 Created: 2023-10-25 Last updated: 2023-10-26
Meyers, C., Löfstedt, T. & Elmroth, E. (2023). Safety-critical computer vision: an empirical survey of adversarial evasion attacks and defenses on computer vision systems. Artificial Intelligence Review, 56, 217-251
2023 (English). In: Artificial Intelligence Review, ISSN 0269-2821, E-ISSN 1573-7462, Vol. 56, p. 217-251. Article in journal (Refereed). Published.
Abstract [en]

Considering the growing prominence of production-level AI and the threat of adversarial attacks that can poison a machine learning model against a certain label, evade classification, or reveal sensitive information about the model and its training data to an attacker, adversaries pose fundamental problems to machine learning systems. Furthermore, much research has focused on the inverse relationship between robustness and accuracy, which raises problems for real-time and safety-critical systems in particular, since they are governed by legal constraints under which software changes must be explainable and every change must be thoroughly tested. While many defenses have been proposed, they are often computationally expensive and tend to reduce model accuracy. We have therefore conducted a large survey of attacks and defenses and present a simple and practical framework for analyzing any machine-learning system from a safety-critical perspective, using adversarial noise to find the upper bound of the failure rate. Using this method, we conclude that all tested configurations of the ResNet architecture fail to meet any reasonable definition of ‘safety-critical’ when tested on even small-scale benchmark data. We examine state-of-the-art defenses and attacks against computer vision systems with a focus on safety-critical applications in autonomous driving, industrial control, and healthcare. By testing combinations of attacks and defenses, their efficacy, and their run-time requirements, we provide substantial empirical evidence that modern neural networks consistently fail to meet established safety-critical standards by a wide margin.
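
As a rough illustration of using adversarial noise to bound the failure rate empirically, the sketch below evaluates a classifier under a single FGSM attack in PyTorch; `model` and `loader` are placeholders, and the paper benchmarks a much broader set of attacks, defenses, and run-time costs.

```python
# Illustrative sketch: the adversarial error rate at perturbation budget eps serves as an
# empirical upper bound on the model's failure rate under worst-case input noise.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Fast Gradient Sign Method: a single signed-gradient step that increases the loss."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_failure_rate(model, loader, eps=8 / 255):
    """Fraction of adversarially perturbed test samples that the model misclassifies."""
    model.eval()
    wrong, total = 0, 0
    for x, y in loader:
        x_adv = fgsm(model, x, y, eps)
        with torch.no_grad():
            wrong += (model(x_adv).argmax(dim=1) != y).sum().item()
        total += y.numel()
    return wrong / total
```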

Place, publisher, year, edition, pages
Springer, 2023
Keywords
Adversarial machine learning, Computer vision, Autonomous vehicles, Safety-critical
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-211212 (URN); 10.1007/s10462-023-10521-4 (DOI); 001014695900002 (); 2-s2.0-85162639161 (Scopus ID)
Funder
Knut and Alice Wallenberg Foundation, 2019.0352
Available from: 2023-06-29 Created: 2023-06-29 Last updated: 2024-01-08. Bibliographically approved.
Simkó, A., Bylund, M., Jönsson, G., Löfstedt, T., Garpebring, A., Nyholm, T. & Jonsson, J. (2023). Towards MR contrast independent synthetic CT generation. Zeitschrift für Medizinische Physik
2023 (English). In: Zeitschrift für Medizinische Physik, ISSN 0939-3889, E-ISSN 1876-4436. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

The use of synthetic CT (sCT) in the radiotherapy workflow would reduce costs and scan time while removing the uncertainties of working with both MR and CT modalities. The performance of deep learning (DL) solutions for sCT generation is steadily increasing; however, most proposed methods were trained and validated on private datasets of a single contrast from a single scanner. Such solutions might not perform equally well on other datasets, limiting their general usability and therefore their value. Additionally, functional evaluations of sCTs, such as dosimetric comparisons with CT-based dose calculations, better show the impact of the methods, but these evaluations are more labor-intensive than pixel-wise metrics.

To improve the generalization of an sCT model, we propose to incorporate a pre-trained DL model that pre-processes the input MR images by generating artificial proton density, T1 and T2 maps (i.e., contrast-independent quantitative maps), which are then used for sCT generation. Using a dataset of only T2w MR images, the robustness of this approach towards input MR contrasts is compared to that of a model trained on the MR images directly. We evaluate the generated sCTs using pixel-wise metrics and by calculating mean radiological depths, as an approximation of the mean delivered dose. On T2w images acquired with the same settings as the training dataset, there was no significant difference between the performance of the models. However, when evaluated on T1w images, and on a wide range of other contrasts and scanners from both public and private datasets, our approach outperforms the baseline model. Using a dataset of T2w MR images, our proposed model thus uses synthetic quantitative maps to generate sCT images, improving the generalization towards other contrasts. Our code and trained models are publicly available.
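
The two-stage idea can be pictured as a simple pipeline; the sketch below uses hypothetical placeholder networks `qmap_net` (MR image to quantitative maps) and `sct_net` (quantitative maps to CT), and is not the authors' released code.

```python
# Illustrative sketch: a contrast-independent intermediate representation (PD, T1, T2 maps)
# decouples the sCT network from the contrast of the input MR image.
import torch

def mr_to_sct(mr_image, qmap_net, sct_net):
    """Map an MR image of (ideally) any contrast to a synthetic CT."""
    with torch.no_grad():
        qmaps = qmap_net(mr_image)   # artificial PD, T1 and T2 maps, e.g. shape (1, 3, H, W)
        return sct_net(qmaps)        # synthetic CT, e.g. shape (1, 1, H, W)
```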

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
MRI contrast, Robust machine learning, Synthetic CT generation
National Category
Computer Sciences; Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:umu:diva-214270 (URN); 10.1016/j.zemedi.2023.07.001 (DOI); 37537099 (PubMedID); 2-s2.0-85169824488 (Scopus ID)
Funder
Cancerforskningsfonden i Norrland, LP 18-2182; Cancerforskningsfonden i Norrland, AMP 18-912; Cancerforskningsfonden i Norrland, AMP 20-1014; Cancerforskningsfonden i Norrland, LP 22-2319; Region Västerbotten; Swedish National Infrastructure for Computing (SNIC)
Available from: 2023-09-12 Created: 2023-09-12 Last updated: 2024-01-09
Chandra, A., Tünnermann, L., Löfstedt, T. & Gratz, R. (2023). Transformer-based deep learning for predicting protein properties in the life sciences. eLIFE, 12, Article ID e82819.
2023 (English). In: eLIFE, E-ISSN 2050-084X, Vol. 12, article id e82819. Article in journal (Refereed). Published.
Abstract [en]

Recent developments in deep learning, coupled with an increasing number of sequenced proteins, have led to a breakthrough in life science applications, in particular in protein property prediction. There is hope that deep learning can close the gap between the number of sequenced proteins and the number of proteins with properties known from lab experiments. Language models from the field of natural language processing have gained popularity for protein property prediction and have led to a new computational revolution in biology, where old prediction results are being improved regularly. Such models can learn useful multipurpose representations of proteins from large open repositories of protein sequences and can be used, for instance, to predict protein properties. The field of natural language processing is growing quickly because of developments in a class of models based on a particular architecture: the Transformer. We review recent developments and the use of large-scale Transformer models in applications for predicting protein characteristics, and how such models can be used to predict, for example, post-translational modifications. We review shortcomings of other deep learning models and explain how Transformer models have quickly proven to be a very promising way to unravel information hidden in sequences of amino acids.
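
To make the recipe concrete, the sketch below builds a toy Transformer encoder over amino-acid tokens with a regression head for a scalar property; it is only an illustration of the model class discussed in the review, since real protein language models are far larger, pretrained on millions of sequences, and use richer tokenization.

```python
# Toy illustration of a Transformer-based protein property predictor (not from the review).
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
TOKEN = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

class ProteinPropertyModel(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(len(AMINO_ACIDS), d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)        # e.g. a scalar property such as solubility

    def forward(self, sequences):
        tokens = torch.tensor([[TOKEN[aa] for aa in seq] for seq in sequences])
        h = self.encoder(self.embed(tokens))      # per-residue representations
        return self.head(h.mean(dim=1))           # pool over residues and predict the property

model = ProteinPropertyModel()
print(model(["MKTAYIAKQR", "GAVLIMCFYW"]).shape)  # torch.Size([2, 1]); equal-length toy input
```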

Place, publisher, year, edition, pages
eLife Sciences Publications, 2023
National Category
Computer and Information Sciences; Other Agricultural Sciences
Identifiers
urn:nbn:se:umu:diva-203829 (URN); 10.7554/elife.82819 (DOI); 000915592600001 (); 36651724 (PubMedID); 2-s2.0-85146532067 (Scopus ID)
Funder
The Kempe Foundations, JCK-2144; The Kempe Foundations, JCK-2015.1
Available from: 2023-01-20 Created: 2023-01-20 Last updated: 2023-09-05. Bibliographically approved.
Vu, M. H., Norman, G., Nyholm, T. & Löfstedt, T. (2022). A Data-Adaptive Loss Function for Incomplete Data and Incremental Learning in Semantic Image Segmentation. IEEE Transactions on Medical Imaging, 41(6), 1320-1330
2022 (English). In: IEEE Transactions on Medical Imaging, ISSN 0278-0062, E-ISSN 1558-254X, Vol. 41, no 6, p. 1320-1330. Article in journal (Refereed). Published.
Abstract [en]

In recent years, deep learning has dramatically improved performance in a variety of medical image analysis applications. Among the different types of deep learning models, convolutional neural networks have been among the most successful, and they have been used in many medical imaging applications.

Training deep convolutional neural networks often requires large amounts of image data to generalize well to new, unseen images. Collecting large amounts of data in the medical imaging domain is often time-consuming and expensive, due to expensive imaging systems and the need for experts to manually produce ground-truth annotations. A potential problem arises if new structures are added when a decision support system is already deployed and in use. Since the field of radiation therapy is constantly developing, such new structures would also have to be covered by the decision support system.

In the present work, we propose a novel loss function to solve multiple problems: imbalanced datasets, partially labeled data, and incremental learning. The proposed loss function adapts to the available data in order to utilize all of it, even when some samples have missing annotations. We demonstrate that the proposed loss function also works well in an incremental learning setting, where an existing model is easily adapted to semi-automatically incorporate delineations of new organs when they appear. Experiments on a large in-house dataset show that the proposed method performs on par with baseline models, while greatly reducing the training time and eliminating the hassle of maintaining multiple models in practice.
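
For illustration only, the sketch below shows one way a loss can adapt to partially labeled data by masking out unannotated classes, assuming PyTorch; the published loss function is more elaborate (it also addresses class imbalance and incremental learning), and this is not the authors' code.

```python
# Illustrative sketch: compute the segmentation loss only over the organ classes that were
# actually delineated for each image, so partially annotated data can still be used.
import torch
import torch.nn.functional as F

def label_adaptive_loss(logits, targets, labelled):
    """logits, targets: (B, C, H, W); labelled: (B, C) boolean, True where class c was annotated."""
    per_class = F.binary_cross_entropy_with_logits(logits, targets, reduction="none").mean(dim=(2, 3))
    mask = labelled.float()
    return (per_class * mask).sum() / mask.sum().clamp(min=1.0)
```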

Place, publisher, year, edition, pages
IEEE, 2022
Keywords
Adaptation models, Computational modeling, CT, Data models, Image segmentation, Incremental Learning, Medical Imaging, Missing Data, Predictive models, Semantic Image Segmentation, Task analysis, Training
National Category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:umu:diva-191280 (URN); 10.1109/TMI.2021.3139161 (DOI); 000804690300003 (); 34965206 (PubMedID); 2-s2.0-85122295338 (Scopus ID)
Funder
Swedish Research Council, 2018-05973; Cancerforskningsfonden i Norrland, AMP 20-1027; Cancerforskningsfonden i Norrland, LP 18-2182; Region Västerbotten; Vinnova
Available from: 2022-01-13 Created: 2022-01-13 Last updated: 2023-01-24. Bibliographically approved.
Simkó, A., Löfstedt, T., Garpebring, A., Nyholm, T. & Jonsson, J. (2022). MRI bias field correction with an implicitly trained CNN. In: Ender Konukoglu; Bjoern Menze; Archana Venkataraman; Christian Baumgartner; Qi Dou; Shadi Albarqouni (Eds.), Proceedings of the 5th international conference on medical imaging with deep learning. Paper presented at International Conference on Medical Imaging with Deep Learning, Zurich, Switzerland, July 6-8, 2022 (pp. 1125-1138). ML Research Press
2022 (English). In: Proceedings of the 5th international conference on medical imaging with deep learning / [ed] Ender Konukoglu; Bjoern Menze; Archana Venkataraman; Christian Baumgartner; Qi Dou; Shadi Albarqouni, ML Research Press, 2022, p. 1125-1138. Conference paper, Published paper (Refereed).
Abstract [en]

In magnetic resonance imaging (MRI), bias fields are difficult to correct since they are inherently unknown. They cause intra-volume intensity inhomogeneities that limit the performance of subsequent automatic medical imaging tasks, e.g., tissue-based segmentation. Since the ground truth is unavailable, training a supervised machine learning solution requires approximating the bias fields, which limits the resulting method. We introduce implicit training, which sidesteps the inherent lack of data and allows the training of machine learning solutions without ground truth. We describe how training a model implicitly for bias field correction allows using non-medical data for training, achieving a highly generalized model. The implicit approach was compared to a more traditional training based on medical data. Both models were compared to an optimized N4ITK method, with evaluations on six datasets. The implicitly trained model improved the homogeneity of all encountered medical data, and it generalized better over a range of anatomies than the traditionally trained model. The model achieves a significant speed-up over an optimized N4ITK method, by a factor of 100, and after training it requires no parameters to tune. For tasks such as bias field correction, where ground truth is generally not available but the characteristics of the corruption are known, implicit training promises to be a fruitful alternative for highly generalized solutions.
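
The sketch below gives one plausible reading of the implicit-training idea, assuming PyTorch: any image, medical or not, is corrupted with a synthetic smooth multiplicative field, and the network is trained so that its output corrects the corruption. The field generator and loss here are illustrative stand-ins, not the authors' implementation.

```python
# Illustrative sketch: no bias-field ground truth is needed because the corruption is
# generated on the fly with known characteristics.
import torch
import torch.nn.functional as F

def random_bias_field(height, width, strength=0.4):
    """Smooth multiplicative field: low-resolution noise upsampled and rescaled around 1."""
    low = torch.rand(1, 1, 4, 4)
    field = F.interpolate(low, size=(height, width), mode="bilinear", align_corners=False)
    return 1.0 + strength * (field - field.mean())

def implicit_training_step(net, clean_image, optimizer):
    """Corrupt a clean (possibly non-medical) image and train the network to undo it."""
    field = random_bias_field(*clean_image.shape[-2:])
    corrupted = clean_image * field
    optimizer.zero_grad()
    corrected = corrupted / net(corrupted).clamp(min=1e-3)   # network predicts the field
    loss = F.mse_loss(corrected, clean_image)
    loss.backward()
    optimizer.step()
    return loss.item()
```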

Place, publisher, year, edition, pages
ML Research Press, 2022
Series
Proceedings of Machine Learning Research, ISSN 2640-3498 ; 172
Keywords
Self-supervised learning, Implicit Training, Magnetic Resonance Imaging, Bias Field Correction, Image Restoration
National Category
Radiology, Nuclear Medicine and Medical Imaging; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:umu:diva-205226 (URN); 2-s2.0-85169103625 (Scopus ID)
Conference
International Conference on Medical Imaging with Deep Learning, Zurich, Switzerland, July 6-8, 2022
Available from: 2023-02-27 Created: 2023-02-27 Last updated: 2023-11-10. Bibliographically approved.
Mehta, R., Filos, A., Baid, U., Sako, C., McKinley, R., Rebsamen, M., . . . Arbel, T. (2022). QU-BraTS: MICCAI BraTS 2020 Challenge on Quantifying Uncertainty in Brain Tumor Segmentation - Analysis of Ranking Scores and Benchmarking Results. Journal of Machine Learning for Biomedical Imaging, 1-54, Article ID 026.
2022 (English). In: Journal of Machine Learning for Biomedical Imaging, ISSN 2766-905X, p. 1-54, article id 026. Article in journal (Refereed). Published.
Abstract [en]

Deep learning (DL) models have provided state-of-the-art performance in a wide variety of medical imaging benchmarking challenges, including the Brain Tumor Segmentation (BraTS) challenges. However, the task of focal pathology multi-compartment segmentation (e.g., tumor and lesion sub-regions) is particularly challenging, and potential errors hinder the translation of DL models into clinical workflows. Quantifying the reliability of DL model predictions in the form of uncertainties could enable clinical review of the most uncertain regions, thereby building trust and paving the way towards clinical translation. Recently, a number of uncertainty estimation methods have been introduced for DL medical image segmentation tasks. Developing scores to evaluate and compare the performance of uncertainty measures will assist the end-user in making more informed decisions. In this study, we explore and evaluate a score developed during the BraTS 2019-2020 task on uncertainty quantification (QU-BraTS), designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation. This score (1) rewards uncertainty estimates that produce high confidence in correct assertions and that assign low confidence to incorrect assertions, and (2) penalizes uncertainty measures that lead to a higher percentage of under-confident correct assertions. We further benchmark the segmentation uncertainties generated by 14 independent participating teams of QU-BraTS 2020, all of which also participated in the main BraTS segmentation task. Overall, our findings confirm the importance and complementary value that uncertainty estimates provide to segmentation algorithms, and hence highlight the need for uncertainty quantification in medical image analyses. Our evaluation code is made publicly available at https://github.com/RagMeh11/QU-BraTS
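
To give a flavor of how such a score can be computed, the sketch below reports, for increasing uncertainty thresholds, the Dice over retained voxels together with the fraction of correctly predicted voxels that were filtered out; this only illustrates the general idea, and the official evaluation code at the repository linked above is the authoritative reference.

```python
# Illustrative sketch only: reward confident correct predictions, penalize filtering out
# voxels that were in fact predicted correctly.
import numpy as np

def dice(pred, target):
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / max(pred.sum() + target.sum(), 1)

def uncertainty_curves(pred, target, uncertainty, thresholds=(0.25, 0.5, 0.75, 1.0)):
    """pred, target: boolean segmentation masks; uncertainty: per-voxel values in [0, 1]."""
    correct = pred == target
    curves = []
    for tau in thresholds:
        keep = uncertainty <= tau                      # retain only confident voxels
        filtered_dice = dice(pred[keep], target[keep])
        filtered_correct = np.logical_and(~keep, correct).sum() / max(correct.sum(), 1)
        curves.append((tau, filtered_dice, filtered_correct))
    return curves
```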

Keywords
Uncertainty Quantification, Trustworthiness, Segmentation, Brain Tumors, Deep Learning, Neuro-Oncology, Glioma, Glioblastoma
National Category
Computer Vision and Robotics (Autonomous Systems); Other Medical Sciences not elsewhere specified
Identifiers
urn:nbn:se:umu:diva-198857 (URN)
Available from: 2022-08-26 Created: 2022-08-26 Last updated: 2023-04-25. Bibliographically approved.
Simkó, A., Löfstedt, T., Garpebring, A., Bylund, M., Nyholm, T. & Jonsson, J. (2021). Changing the Contrast of Magnetic Resonance Imaging Signals using Deep Learning. In: Mattias Heinrich; Qi Dou; Marleen de Bruijne; Jan Lellmann; Alexander Schläfer; Floris Ernst (Eds.), Proceedings of the Fourth Conference on Medical Imaging with Deep Learning, PMLR. Paper presented at Medical Imaging with Deep Learning (MIDL), Online, 7-9 July, 2021 (pp. 713-727). Lübeck University; Hamburg University of Technology, 143
2021 (English). In: Proceedings of the Fourth Conference on Medical Imaging with Deep Learning, PMLR / [ed] Mattias Heinrich; Qi Dou; Marleen de Bruijne; Jan Lellmann; Alexander Schläfer; Floris Ernst, Lübeck University; Hamburg University of Technology, 2021, Vol. 143, p. 713-727. Conference paper, Published paper (Refereed).
Abstract [en]

The contrast settings to select before acquiring a magnetic resonance imaging (MRI) signal depend heavily on the subsequent tasks. As each contrast highlights different tissues, automated segmentation tools, for example, might be optimized for a certain contrast, while for radiotherapy, multiple scans of the same region with different contrasts can achieve better accuracy for delineating tumours and organs at risk. Unfortunately, the optimal contrast for the subsequent automated methods might not be known at the time of signal acquisition, and performing multiple scans with different contrasts increases the total examination time, while registering the sequences introduces extra work and potential errors. Building on the recent achievements of deep learning in medical applications, the presented work describes a novel approach for transferring any contrast to any other. The novel model architecture incorporates the signal equation for spin echo sequences, and hence the model inherently learns the unknown quantitative maps for proton density and the T1 and T2 relaxation times (PD, T1 and T2, respectively). This grants the model the ability to retrospectively reconstruct spin echo sequences by changing the contrast settings echo time and repetition time (TE and TR, respectively). The model learns to identify the contrast of pelvic MR images, and therefore no paired data of the same anatomy from different contrasts are required for training. This means that the experiments are easily reproducible with other contrasts or other patient anatomies. Regardless of the contrast of the input image, the model achieves accurate results when reconstructing signals with the contrasts available for evaluation. For the same anatomy, the quantitative maps are consistent for a range of contrasts of input images. Realized in practice, the proposed method would greatly simplify the modern radiotherapy pipeline. The trained model is made public together with a tool for testing the model on example images.
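
The spin echo signal equation that the architecture incorporates can be written per voxel as S(TE, TR) = PD * (1 - exp(-TR/T1)) * exp(-TE/T2). The sketch below shows how, given quantitative maps, an image can be retrospectively re-rendered for new TE and TR settings; it assumes NumPy and is not the authors' released model or tool.

```python
# Illustrative sketch: retrospective spin echo contrast from quantitative maps.
import numpy as np

def spin_echo_signal(pd_map, t1_map, t2_map, te, tr):
    """S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2), with T1, T2, TE and TR in milliseconds."""
    return pd_map * (1.0 - np.exp(-tr / t1_map)) * np.exp(-te / t2_map)

# Example: a T2-weighted-like re-rendering would use a long TE and long TR, e.g.
# t2w_like = spin_echo_signal(pd_map, t1_map, t2_map, te=90.0, tr=4000.0)
```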

Place, publisher, year, edition, pages
Lübeck University; Hamburg University of Technology, 2021
Series
Proceedings of Machine Learning Research, ISSN 2640-3498
National Category
Radiology, Nuclear Medicine and Medical Imaging; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:umu:diva-190497 (URN); 2-s2.0-85162848187 (Scopus ID)
Conference
Medical Imaging with Deep Learning (MIDL), Online, 7-9 July, 2021.
Available from: 2021-12-16 Created: 2021-12-16 Last updated: 2023-10-25. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0001-7119-7646
