Publications (10 of 11)
Khalid, N., Koochali, M., Rajashekar, V., Munir, M., Edlund, C., Jackson, T. R., . . . Ahmed, S. (2022). DeepMuCS: A framework for co-culture microscopic image analysis: from generation to segmentation. In: 2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI). Paper presented at 2022 IEEE-EMBS International Conference on Biomedical and Health Informatics, BHI 2022, September 27-30, 2022 (pp. 1-4). IEEE
2022 (English). In: 2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI), IEEE, 2022, p. 1-4. Conference paper, Published paper (Refereed)
Abstract [en]

Discriminating between cell types in a co-culture environment with multiple cell lines can assist in examining the interaction between different cell populations. Identifying the different cell cultures, in addition to segmenting cells in co-culture, is essential for understanding the cellular mechanisms associated with disease states. In drug development, biologists favor co-culture models because they replicate the in vivo tumor environment better than monoculture models and have a measurable effect on cancer cell response to treatment. Co-culture models are therefore critical for designing drugs with maximum efficacy against cancer while minimizing harm to the rest of the body. Previously, progress on cell-type-aware segmentation in monoculture was minimal, and there was no work at all on co-culture. The introduction of the LIVECell dataset has made experiments on cell-type-aware segmentation possible, but it consists of microscopic images of monocultures only. This paper presents a framework for generating co-culture microscopic image data, in which each image can contain multiple cell cultures, together with a pipeline for culture-dependent cell segmentation in such images. An extensive evaluation shows that cell-type-aware segmentation in co-culture microscopic images can be achieved with good precision.
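The generation step the abstract describes — compositing cells drawn from monoculture sources into one image that carries per-cell culture labels — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, the rectangular stand-in crops, and the example cell-line names are assumptions:

```python
import random

def compose_co_culture(canvas_size, crops):
    """Paste single-culture cell crops onto one canvas, keeping a per-cell
    record of which culture each pasted cell came from.

    `crops` is a list of (culture_name, height, width) tuples standing in
    for real monoculture cell crops (e.g. cut out of LIVECell images).
    Returns the label canvas and the list of placements.
    """
    h, w = canvas_size
    # 0 = background; any other value is an index into `placements`
    canvas = [[0] * w for _ in range(h)]
    placements = []
    for culture, ch, cw in crops:
        # choose a random top-left corner where the crop still fits
        y = random.randrange(0, h - ch + 1)
        x = random.randrange(0, w - cw + 1)
        placements.append({"culture": culture, "box": (y, x, ch, cw)})
        cell_id = len(placements)
        for dy in range(ch):
            for dx in range(cw):
                canvas[y + dy][x + dx] = cell_id
    return canvas, placements

random.seed(0)
canvas, placements = compose_co_culture(
    (64, 64),
    [("A172", 10, 12), ("BT474", 8, 8), ("A172", 6, 9)],
)
```

A real pipeline would paste irregular cell masks with blending and handle occlusion explicitly; here a later crop simply overwrites an earlier one.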

Place, publisher, year, edition, pages
IEEE, 2022
Keywords
biomedical, cell segmentation, co-culture, deep learning, healthcare
National Category
Medical Biotechnology (with a focus on Cell Biology (including Stem Cell Biology), Molecular Biology, Microbiology, Biochemistry or Biopharmacy)
Identifiers
urn:nbn:se:umu:diva-201641 (URN); 10.1109/BHI56158.2022.9926936 (DOI); 000895865900089 (); 2-s2.0-85143072914 (Scopus ID); 9781665487917 (ISBN)
Conference
2022 IEEE-EMBS International Conference on Biomedical and Health Informatics, BHI 2022, September 27-30, 2022
Available from: 2022-12-13 Created: 2022-12-13 Last updated: 2023-09-05. Bibliographically approved
Khalid, N., Schmeisser, F., Koochali, M., Munir, M., Edlund, C., Jackson, T. R., . . . Ahmed, S. (2022). Point2Mask: A Weakly Supervised Approach for Cell Segmentation Using Point Annotation. In: Guang Yang; Angelica Aviles-Rivero; Michael Roberts; Carola-Bibiane Schönlieb (Ed.), Medical image understanding and analysis: 26th annual conference, MIUA 2022, Cambridge, UK, July 27–29, 2022, proceedings. Paper presented at 26th Annual Conference on Medical Image Understanding and Analysis, MIUA 2022, Cambridge, July 27-29, 2022. (pp. 139-153). Springer
2022 (English). In: Medical image understanding and analysis: 26th annual conference, MIUA 2022, Cambridge, UK, July 27–29, 2022, proceedings / [ed] Guang Yang; Angelica Aviles-Rivero; Michael Roberts; Carola-Bibiane Schönlieb, Springer, 2022, p. 139-153. Conference paper, Published paper (Refereed)
Abstract [en]

Identifying cells in microscopic images is a crucial step in image-based cell biology research. Cell instance segmentation provides an opportunity to study the shape, structure, form, and size of cells. Deep learning approaches to cell instance segmentation rely on an instance segmentation mask for each cell, which is labor-intensive and expensive to produce. An ample amount of unlabeled microscopic data is available in the cell biology domain, but because of the tedious and exorbitant annotations required by instance segmentation approaches, the full potential of these data remains unexplored. This paper presents a weakly supervised approach that performs cell instance segmentation using only point- and bounding-box-based annotation, which enormously reduces the annotation effort. Evaluated on the benchmark LIVECell dataset, using only a bounding box and randomly generated points on each cell, the approach achieves a mean average precision of 43.53%, on par with a fully supervised segmentation method trained with complete segmentation masks. In addition, annotating with a bounding box and points is 3.71 times faster than full mask annotation.
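The weak labels the abstract relies on — a bounding box plus randomly generated points per cell — are cheap to produce programmatically, which is where the 3.71x annotation speed-up comes from. A minimal sketch (the `(x, y, w, h)` box format and the point count are assumptions, not the paper's exact protocol):

```python
import random

def point_annotation(bbox, n_points=5, rng=random):
    """Given a cell's bounding box (x, y, w, h), sample n random points
    inside it to serve as a weak, point-based label for that cell —
    a stand-in for a full per-pixel segmentation mask."""
    x, y, w, h = bbox
    return [(x + rng.uniform(0, w), y + rng.uniform(0, h))
            for _ in range(n_points)]

random.seed(42)
pts = point_annotation((10.0, 20.0, 30.0, 15.0), n_points=5)
```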

Place, publisher, year, edition, pages
Springer, 2022
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 13413
Keywords
Cell segmentation, Deep learning, Point annotation, Weakly supervised
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-198916 (URN); 10.1007/978-3-031-12053-4_11 (DOI); 000883331000011 (); 2-s2.0-85135942969 (Scopus ID); 978-3-031-12052-7 (ISBN); 978-3-031-12053-4 (ISBN)
Conference
26th Annual Conference on Medical Image Understanding and Analysis, MIUA 2022, Cambridge, July 27-29, 2022.
Available from: 2022-09-19 Created: 2022-09-19 Last updated: 2023-09-05. Bibliographically approved
Khalid, N., Munir, M., Edlund, C., Jackson, T. R., Trygg, J., Sjögren, R., . . . Ahmed, S. (2021). DeepCeNS: An end-to-end Pipeline for Cell and Nucleus Segmentation in Microscopic Images. In: Proceedings of the International Joint Conference on Neural Networks. Paper presented at 2021 International Joint Conference on Neural Networks, IJCNN 2021, Virtual, Shenzhen, China, 18-22 July, 2021. IEEE
2021 (English). In: Proceedings of the International Joint Conference on Neural Networks, IEEE, 2021. Conference paper, Published paper (Refereed)
Abstract [en]

With the evolution of deep learning over the past decade, many biomedical problems that once seemed strenuous have become feasible. The introduction of the U-Net and Mask R-CNN architectures has paved the way for object detection and segmentation tasks in applications ranging from security to biomedicine. In the cell biology domain, light microscopy imaging provides a cheap and accessible source of raw data for studying biological phenomena. By leveraging such data with deep learning techniques, human diseases can be diagnosed more easily and the development of treatments can be greatly expedited. In microscopic imaging, accurate segmentation of individual cells is a crucial step toward better insight into cellular heterogeneity. To address these challenges, this paper proposes DeepCeNS to detect and segment cells and nuclei in microscopic images. We evaluate the proposed pipeline on the EVICAN2 dataset, which contains microscopic images from a variety of microscopes covering numerous cell cultures. DeepCeNS outperforms EVICAN-MRCNN by a significant margin on EVICAN2.
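Comparisons like the one in this abstract ultimately rest on matching predicted masks against ground-truth masks, typically via intersection-over-union (IoU). A minimal sketch using pixel-coordinate sets, a simplification of how masks are usually stored:

```python
def mask_iou(mask_a, mask_b):
    """Intersection-over-union of two binary masks given as sets of
    (row, col) pixel coordinates. IoU is the standard criterion for
    deciding whether a predicted cell matches a ground-truth cell."""
    a, b = set(mask_a), set(mask_b)
    union = a | b
    if not union:
        return 0.0
    return len(a & b) / len(union)

gt = {(r, c) for r in range(0, 10) for c in range(0, 10)}    # 10x10 cell
pred = {(r, c) for r in range(0, 10) for c in range(2, 12)}  # shifted 2 px
iou = mask_iou(pred, gt)  # overlap 10x8 = 80 px, union 10x12 = 120 px
```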

Place, publisher, year, edition, pages
IEEE, 2021
Series
Proceedings of International Joint Conference on Neural Networks, ISSN 2161-4393, E-ISSN 2161-4407 ; 2021 July
Keywords
biomedical, cell segmentation, deep learning, healthcare, nucleus segmentation
National Category
Computer Graphics and Computer Vision; Medical Imaging
Identifiers
urn:nbn:se:umu:diva-188631 (URN); 10.1109/IJCNN52387.2021.9533624 (DOI); 000722581702085 (); 2-s2.0-85116427196 (Scopus ID); 9780738133669 (ISBN); 9781665439008 (ISBN); 9781665445979 (ISBN)
Conference
2021 International Joint Conference on Neural Networks, IJCNN 2021, Virtual, Shenzhen, China, 18-22 July, 2021
Available from: 2021-10-18 Created: 2021-10-18 Last updated: 2025-02-09. Bibliographically approved
Khalid, N., Munir, M., Edlund, C., Jackson, T. R., Trygg, J., Sjögren, R., . . . Ahmed, S. (2021). DeepCIS: An end-to-end Pipeline for Cell-type aware Instance Segmentation in Microscopic Images. In: 2021 IEEE EMBS International Conference on Biomedical and Health Informatics, Proceedings. Paper presented at 2021 IEEE EMBS International Conference on Biomedical and Health Informatics, BHI 2021, Athens, Greece, 27-30 July, 2021. Institute of Electrical and Electronics Engineers (IEEE)
2021 (English). In: 2021 IEEE EMBS International Conference on Biomedical and Health Informatics, Proceedings, Institute of Electrical and Electronics Engineers (IEEE), 2021. Conference paper, Published paper (Refereed)
Abstract [en]

Accurate cell segmentation in microscopic images is a useful tool for analyzing individual cell behavior, which helps in diagnosing human diseases and developing new treatments. Segmenting individual cells in a microscopic image with many cells in view allows quantification of single-cell features, such as shape or movement patterns, providing rich insight into cellular heterogeneity. Most cell segmentation algorithms to date focus on segmenting the cells in an image without classifying their culture. Discriminating among cell types in microscopic images could open a new era of high-throughput cell microscopy: multiple cell types in co-culture could be identified easily, and studying changes in cell morphology could support applications such as drug treatment. To address this gap, DeepCIS is proposed to detect, segment, and classify the culture of cells and nuclei in microscopic images. We evaluate the proposed pipeline on the EVICAN60 dataset, which contains microscopic images from a variety of microscopes covering numerous cell cultures. To further demonstrate the utility of DeepCIS, we design various experimental settings to uncover its learning potential. We achieve a mean average precision of 24.37% for the segmentation task, averaged over 30 cell and nucleus classes.
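The reported score — mean average precision averaged over 30 cell and nucleus classes — can be illustrated with a stripped-down computation. This sketch uses a simple precision-at-each-hit form of AP and hypothetical class names; real benchmarks (e.g. COCO-style) integrate precision over recall levels and multiple IoU thresholds:

```python
def average_precision(matches):
    """AP for one class: `matches` lists, for predictions sorted by
    descending confidence, whether each matched a ground-truth instance
    (e.g. at IoU >= 0.5). Averages precision at each true positive."""
    hits, precisions = 0, []
    for rank, is_tp in enumerate(matches, start=1):
        if is_tp:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0

def mean_average_precision(per_class_matches):
    """Mean of per-class APs — the class-averaged score in the abstract."""
    aps = [average_precision(m) for m in per_class_matches.values()]
    return sum(aps) / len(aps)

score = mean_average_precision({
    "A172 cell":    [True, True, False, True],  # AP = (1 + 1 + 3/4) / 3
    "A172 nucleus": [True, False],              # AP = 1
})
```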

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2021
Keywords
Biomedical, Cell-type classification, Cell-type segmentation, Deep learning, Healthcare, Nucleus-type segmentation
National Category
Medical Imaging
Identifiers
urn:nbn:se:umu:diva-193019 (URN); 10.1109/BHI50953.2021.9508480 (DOI); 2-s2.0-85125471267 (Scopus ID); 9781665403580 (ISBN)
Conference
2021 IEEE EMBS International Conference on Biomedical and Health Informatics, BHI 2021, Athens, Greece, 27-30 July, 2021.
Available from: 2022-03-15 Created: 2022-03-15 Last updated: 2025-02-09. Bibliographically approved
Edlund, C., Jackson, T. R., Khalid, N., Bevan, N., Dale, T., Dengel, A., . . . Sjögren, R. (2021). LIVECell: a large-scale dataset for label-free live cell segmentation. Nature Methods, 18(9), 1038-1045
2021 (English). In: Nature Methods, ISSN 1548-7091, E-ISSN 1548-7105, Vol. 18, no 9, p. 1038-1045. Article in journal (Other academic). Published
Abstract [en]

Light microscopy combined with well-established protocols of two-dimensional cell culture facilitates high-throughput quantitative imaging to study biological phenomena. Accurate segmentation of individual cells in images enables exploration of complex biological questions, but can require sophisticated image processing pipelines in cases of low contrast and high object density. Deep learning-based methods are considered state-of-the-art for image segmentation but typically require vast amounts of annotated data, for which there is no suitable resource available in the field of label-free cellular imaging. Here, we present LIVECell, a large, high-quality, manually annotated and expert-validated dataset of phase-contrast images, consisting of over 1.6 million cells from a diverse set of cell morphologies and culture densities. To further demonstrate its use, we train convolutional neural network-based models using LIVECell and evaluate model segmentation accuracy with a proposed suite of benchmarks.

Place, publisher, year, edition, pages
Nature Publishing Group, 2021
National Category
Medical Imaging
Identifiers
urn:nbn:se:umu:diva-182681 (URN); 10.1038/s41592-021-01249-6 (DOI); 000691220800001 (); 34462594 (PubMedID); 2-s2.0-85113983609 (Scopus ID)
Note

Previously included in thesis in manuscript form. 

Available from: 2021-05-03 Created: 2021-05-03 Last updated: 2025-02-09. Bibliographically approved
Sjögren, R. & Trygg, J. (2021). Out-of-Distribution Example Detection in Deep Neural Networks using Distance to Modelled Embedding.
2021 (English). Manuscript (preprint) (Other academic)
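The title suggests scoring a test example by how far its network embedding lies from a statistical model of the training embeddings. Since no abstract is included for this record, the following is only a guess at the flavor of such a method — a diagonal-covariance Mahalanobis distance, with every name and detail assumed rather than taken from the manuscript:

```python
def fit_embedding_model(train_embeddings):
    """Model training embeddings by their per-dimension mean and variance
    (a deliberately simple, diagonal stand-in for a fuller model)."""
    n = len(train_embeddings)
    dim = len(train_embeddings[0])
    means = [sum(e[d] for e in train_embeddings) / n for d in range(dim)]
    variances = [
        sum((e[d] - means[d]) ** 2 for e in train_embeddings) / n
        for d in range(dim)
    ]
    return means, variances

def distance_to_model(embedding, model, eps=1e-12):
    """Standardized Euclidean (diagonal Mahalanobis) distance of one
    embedding to the modelled training distribution; large values would
    flag a likely out-of-distribution example."""
    means, variances = model
    return sum(
        (x - m) ** 2 / (v + eps)
        for x, m, v in zip(embedding, means, variances)
    ) ** 0.5

model = fit_embedding_model([[0.0, 1.0], [2.0, 1.0], [1.0, 1.0], [1.0, 3.0]])
in_dist = distance_to_model([1.0, 1.5], model)    # at the training mean
out_dist = distance_to_model([8.0, -6.0], model)  # far from training data
```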
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-182680 (URN)
Available from: 2021-05-03 Created: 2021-05-03 Last updated: 2021-05-04
Sjögren, R. (2021). Synergies between Chemometrics and Machine Learning. (Doctoral dissertation). Umeå: Umeå Universitet
2021 (English). Doctoral thesis, comprehensive summary (Other academic)
Alternative title [sv]
Synergier mellan kemometri och maskininlärning
Abstract [en]

Thanks to digitization and automation, data in all shapes and forms are generated in ever-growing quantities throughout society, industry and science. Data-driven methods, such as machine learning algorithms, are already widely used to benefit from all these data in all kinds of applications, ranging from text suggestion in smartphones to process monitoring in industry. To ensure maximal benefit to society, we need workflows to generate, analyze and model data that are performant as well as robust and trustworthy.

There are several scientific disciplines aiming to develop data-driven methodologies, two of which are machine learning and chemometrics. Machine learning is part of artificial intelligence and develops algorithms that learn from data. Chemometrics, on the other hand, is a subfield of chemistry that aims to generate and analyze complex chemical data in an optimal manner. There is already a certain overlap between the two fields, in that machine learning algorithms are used for predictive modelling within chemometrics. However, since both fields aim to increase the value of data and have disparate backgrounds, there are plenty of possible synergies that could benefit both. Thanks to its wide applicability, machine learning offers many tools and lessons that go beyond the predictive models used within chemometrics today. Chemometrics, for its part, has always been application-oriented, and this pragmatism has made it widely used for quality assurance within regulated industries.

This thesis serves to nuance the relationship between the two fields and to show that knowledge in either field can be used to benefit the other. We explore how tools widely used in applied machine learning can help chemometrics break new ground in a case study of text analysis of patents in Paper I. We then draw inspiration from chemometrics and show how principles of experimental design can help us optimize large-scale data processing pipelines in Paper II, and how a method common in chemometrics can be adapted to allow artificial neural networks to detect outlier observations in Paper III. We then show how experimental design principles can be used to ensure quality in the core of modern machine learning, namely the generation of large-scale datasets, in Paper IV. Lastly, we outline directions for future research and how state-of-the-art research in machine learning can benefit chemometric method development.

Abstract [sv]

Thanks to digitization and automation, growing amounts of data in all possible forms are generated throughout society, industry and academia. To exploit these data in the best way, so-called data-driven methods, for example machine learning, are already used today in a multitude of applications, from suggesting the next word in text messages on smartphones to process monitoring in industry. To maximize the societal benefit of the data being generated, we need robust and reliable workflows to create, analyze and model data for all conceivable applications.

There are many scientific fields that develop and exploit data-driven methods, two of which are machine learning and chemometrics. Machine learning falls within what is called artificial intelligence and develops algorithms that learn from data. Chemometrics, by contrast, has its origin in chemistry and develops methods for generating, analyzing and maximizing the value of complex chemical data. There is a certain overlap between the fields, in that machine learning algorithms are used extensively for predictive modelling within chemometrics. Since both fields try to increase the value of data and have widely differing backgrounds, there are many potential synergy effects. Because machine learning is so widely used, it offers many tools and lessons beyond the predictive models already used within chemometrics. On the other hand, chemometrics has always focused on practical application, and this pragmatism has made it widely used today for quality work within regulated industry.

This thesis aims to nuance the relationship between chemometrics and machine learning and to show that lessons from each field can benefit the other. We show how tools common within machine learning can help chemometrics break new ground in a case study of text analysis of patent collections in Paper I. We then borrow from chemometrics and show how experimental design can be used to optimize large-scale computational pipelines in Paper II, and how a method common within chemometrics can be reformulated to detect outlier observations in artificial neural networks in Paper III. After that, we show how principles from experimental design can be used to ensure quality in the core of modern machine learning, namely the creation of large datasets, in Paper IV. Finally, we give suggestions for future research and for how the latest methods in machine learning can benefit method development within chemometrics.

Place, publisher, year, edition, pages
Umeå: Umeå Universitet, 2021. p. 50
Keywords
computational science, machine learning, chemometrics, multivariate data analysis, design of experiments, data science, beräkningsvetenskap, maskininlärning, kemometri, multivariat dataanalys, experimentdesign
National Category
Other Chemistry Topics; Bioinformatics and Computational Biology; Computer Sciences
Identifiers
urn:nbn:se:umu:diva-182683 (URN); 978-91-7855-558-1 (ISBN); 978-91-7855-559-8 (ISBN)
Public defence
2021-05-28, KBC Glasburen, KBC building, Umeå, 09:00 (English)
Opponent
Supervisors
Funder
eSSENCE - An eScience Collaboration; The Swedish Foundation for International Cooperation in Research and Higher Education (STINT); Swedish Research Council, 2016‐04376
Available from: 2021-05-07 Created: 2021-05-03 Last updated: 2025-02-05. Bibliographically approved
Sjögren, R., Stridh, K., Skotare, T. & Trygg, J. (2020). Multivariate patent analysis: using chemometrics to analyze collections of chemical and pharmaceutical patents. Journal of Chemometrics, 34(1), Article ID e3041.
2020 (English). In: Journal of Chemometrics, ISSN 0886-9383, E-ISSN 1099-128X, Vol. 34, no 1, article id e3041. Article in journal (Refereed). Published
Abstract [en]

Patents are an important source of technological knowledge, but the number of existing patents is vast and quickly growing, which makes tools and methodologies for quickly revealing patterns in patent collections important. In this paper, we describe how structured chemometric principles of multivariate data analysis can be applied to text analysis in a novel combination with common machine learning preprocessing methodologies. We demonstrate our methodology in two case studies. Using principal component analysis (PCA) on a collection of 12,338 patent abstracts from 25 big-pharma companies revealed the subfields in which the companies are active. Using PCA on a smaller collection of patents retrieved by searching for a specific term proved useful for quickly understanding how patent classifications relate to the search term. By using orthogonal projections to latent structures (O-PLS) on patent classification schemes, we were able to separate patents at a more detailed level than with PCA. Lastly, we performed multi-block modeling using OnPLS on bag-of-words representations of abstracts, claims, and detailed descriptions, respectively, showing that semantic variation relating to patent classification is consistent across multiple text blocks, represented as globally joint variation. We conclude that using machine learning to transform unstructured data into structured data provides a good preprocessing step for subsequent chemometric multivariate data analysis and yields an easily interpretable and novel workflow for understanding large collections of patents. We demonstrate this on collections of chemical and pharmaceutical patents.
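The workflow of the first case study — turn abstracts into a bag-of-words matrix, then run PCA to reveal groupings — can be sketched end to end. The toy documents and the single-component power iteration are illustrative only; the paper's analyses are far larger and inspect multiple components:

```python
def bag_of_words(docs):
    """Term-count vectors over a shared, sorted vocabulary — the machine
    learning preprocessing step that turns free text into a matrix."""
    vocab = sorted({w for d in docs for w in d.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    rows = []
    for d in docs:
        row = [0.0] * len(vocab)
        for w in d.lower().split():
            row[index[w]] += 1.0
        rows.append(row)
    return rows, vocab

def first_principal_component(rows, iters=200):
    """First PCA loading of the mean-centered matrix, via power iteration
    on X^T X. One component is enough to show the idea."""
    n, p = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(p)]
    x = [[r[j] - means[j] for j in range(p)] for r in rows]
    v = [float(j + 1) for j in range(p)]  # asymmetric start vector
    for _ in range(iters):
        xv = [sum(xi[j] * v[j] for j in range(p)) for xi in x]          # X v
        w = [sum(x[i][j] * xv[i] for i in range(n)) for j in range(p)]  # X^T X v
        norm = sum(c * c for c in w) ** 0.5 or 1.0
        v = [c / norm for c in w]
    return v

# Four toy "patent abstracts" from two technology areas
docs = [
    "antibody binding assay",
    "antibody binding kinetics",
    "polymer coating process",
    "polymer coating resin",
]
rows, vocab = bag_of_words(docs)
pc1 = first_principal_component(rows)
```

On this toy collection the first loading contrasts the antibody terms against the polymer terms, i.e. PCA recovers the two subfields, which is the behavior the case study exploits at scale.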

Place, publisher, year, edition, pages
John Wiley & Sons, 2020
Keywords
text analytics, OnPLS, principal component analysis, orthogonal projections to latent structures, feature engineering
National Category
Other Chemistry Topics
Identifiers
urn:nbn:se:umu:diva-152511 (URN); 10.1002/cem.3041 (DOI); 000509318600011 (); 2-s2.0-85046797919 (Scopus ID)
Funder
Swedish Research Council, 2016‐04376; eSSENCE - An eScience Collaboration; The Swedish Foundation for International Cooperation in Research and Higher Education (STINT)
Available from: 2018-10-09 Created: 2018-10-09 Last updated: 2023-03-24. Bibliographically approved
Skotare, T., Sjögren, R., Surowiec, I., Nilsson, D. & Trygg, J. (2020). Visualization of descriptive multiblock analysis. Journal of Chemometrics, 34(1), Article ID e3071.
2020 (English). In: Journal of Chemometrics, ISSN 0886-9383, E-ISSN 1099-128X, Vol. 34, no 1, article id e3071. Article in journal (Refereed). Published
Abstract [en]

Understanding and making the most of complex data collected from multiple sources is a challenging task. Data integration is the procedure of describing the main features in multiple data blocks, and several methods for multiblock analysis have been previously developed, including OnPLS and JIVE. One of the main challenges is how to visualize and interpret the results of multiblock analyses because of the increased model complexity and sheer size of data. In this paper, we present novel visualization tools that simplify interpretation and overview of multiblock analysis. We introduce a correlation matrix plot that provides an overview of the relationships between blocks found by multiblock models. We also present a multiblock scatter plot, a metadata correlation plot, and a variation distribution plot, that simplify the interpretation of multiblock models. We demonstrate our visualizations on an industrial case study in vibration spectroscopy (NIR, UV, and Raman datasets) as well as a multiomics integration study (transcript, metabolite, and protein datasets). We conclude that our visualizations provide useful tools to harness the complexity of multiblock analysis and enable better understanding of the investigated system.
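The correlation matrix plot described in the abstract reduces, at its core, to pairwise correlations between per-block component scores. A sketch with made-up score vectors for three blocks, with names borrowed from the spectroscopy case study (NIR, UV, Raman):

```python
def pearson(xs, ys):
    """Pearson correlation between two score vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def block_correlation_matrix(block_scores):
    """Pairwise correlations between per-block component scores — the
    quantity behind a block-level correlation matrix plot."""
    names = sorted(block_scores)
    return {
        (a, b): pearson(block_scores[a], block_scores[b])
        for a in names for b in names
    }

scores = {
    "NIR":   [1.0, 2.0, 3.0, 4.0],
    "Raman": [1.1, 1.9, 3.2, 3.8],  # tracks NIR closely (joint variation)
    "UV":    [4.0, 1.0, 3.0, 2.0],  # largely unrelated pattern
}
corr = block_correlation_matrix(scores)
```

In a plot, the strongly correlated NIR/Raman pair would stand out as a shared (globally joint) component, while UV would not.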

Place, publisher, year, edition, pages
John Wiley & Sons, 2020
Keywords
data fusion, descriptive analytics, multiblock analysis, OnPLS, visualization
National Category
Other Chemistry Topics
Identifiers
urn:nbn:se:umu:diva-152512 (URN); 10.1002/cem.3071 (DOI); 000509318600006 (); 2-s2.0-85051048496 (Scopus ID)
Funder
eSSENCE - An eScience Collaboration; Swedish Research Council, 2016‐04376
Available from: 2018-10-09 Created: 2018-10-09 Last updated: 2020-03-12. Bibliographically approved
Rentoft, M., Svensson, D., Sjödin, A., Olason, P. I., Sjöström, O., Nylander, C., . . . Johansson, E. (2019). A geographically matched control population efficiently limits the number of candidate disease-causing variants in an unbiased whole-genome analysis. PLOS ONE, 14(3), Article ID e0213350.
2019 (English). In: PLOS ONE, E-ISSN 1932-6203, Vol. 14, no 3, article id e0213350. Article in journal (Refereed). Published
Abstract [en]

Whole-genome sequencing is a promising approach for studies of human autosomal dominant disease. However, the vast number of genetic variants observed by this method constitutes a challenge when trying to identify the causal variants. This is often handled by restricting disease studies to the most damaging variants, e.g. those found in coding regions, and overlooking the remaining genetic variation. Such a biased approach partly explains why the genetic causes of disease in many families with dominantly inherited diseases remain unsolved today, despite these families being included in whole-genome sequencing studies. Here we explore the use of a geographically matched control population to minimize the number of candidate disease-causing variants without excluding variants based on assumptions about genomic position or functional predictions. To exemplify the benefit of the geographically matched control population, we apply a typical disease-variant filtering strategy in a family with an autosomal dominant form of colorectal cancer. With the geographically matched control population we end up with 26 candidate variants genome-wide, in contrast to the tens of thousands of candidates left when only publicly available variant datasets are used. The effect of the local control population is dual: it (1) reduces the total number of candidate variants shared between affected individuals, and, more importantly, (2) increases the rate at which the number of candidate variants is reduced as additional affected family members are included in the filtering strategy. We demonstrate that a geographically matched control population effectively limits the number of candidate disease-causing variants and may provide the means by which variants suitable for functional studies are identified genome-wide.
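The filtering strategy in the abstract — intersect variants across affected family members, then subtract everything seen in the geographically matched controls — is set arithmetic at heart. A sketch with hypothetical variant identifiers (not real loci):

```python
def candidate_variants(affected_genotypes, control_variants):
    """Shared-variant filtering for a dominant disease: keep variants
    present in every affected family member, then drop any variant also
    present in the (geographically matched) control population."""
    shared = set.intersection(*map(set, affected_genotypes))
    return shared - set(control_variants)

affected = [
    {"v1", "v2", "v3", "v5", "v7"},  # affected member 1
    {"v1", "v2", "v3", "v6", "v7"},  # affected member 2
    {"v1", "v3", "v7", "v8"},        # affected member 3
]
controls = {"v3", "v7"}              # common local variants, not causal
candidates = candidate_variants(affected, controls)
```

With only the first two members the candidate set is {v1, v2}; adding member 3 removes v2 as well, illustrating how each additional affected relative, combined with the control filter, shrinks the candidate list.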

Place, publisher, year, edition, pages
Public Library of Science, 2019
National Category
Medical Genetics and Genomics
Identifiers
urn:nbn:se:umu:diva-158021 (URN); 10.1371/journal.pone.0213350 (DOI); 000462465800028 (); 30917156 (PubMedID); 2-s2.0-85063572524 (Scopus ID)
Funder
Knut and Alice Wallenberg Foundation, 2011.0042
Available from: 2019-04-10 Created: 2019-04-10 Last updated: 2025-02-10. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0001-7881-0968
