Bounding box is all you need: learning to segment cells in 2D microscopic images via box annotations
German Research Center for Artificial Intelligence (DFKI) GmbH, Kaiserslautern, Germany; RPTU Kaiserslautern–Landau, Kaiserslautern, Germany.
Sartorius, Digital Solutions, Royston, United Kingdom.
Sartorius, BioAnalytics, Royston, United Kingdom.
Sartorius, BioAnalytics, Ann Arbor, United States.
2024 (English). In: Medical image understanding and analysis: 28th annual conference, MIUA 2024, Manchester, UK, July 24–26, 2024, proceedings, part I / [ed] Moi Hoon Yap; Connah Kendrick; Ardhendu Behera; Timothy Cootes; Reyer Zwiggelaar. Cham: Springer, 2024, p. 314-328. Conference paper, published paper (refereed).
Abstract [en]

Microscopic imaging plays a pivotal role in many fields of science and medicine, offering invaluable insights into the intricate world of cellular biology. At the heart of this endeavor lies the need for accurate identification and characterization of individual cells within these images. Deep learning-based cell segmentation, which delineates individual cells in complex microscopic images, is central to cell analysis: it forms the foundation for extracting meaningful information about cell morphology, spatial organization, and interactions. However, traditional deep-learning models for cell segmentation require extensive and expensive annotation masks for each cell in the image, posing a significant challenge. To address this issue, this study introduces CellBoxify, a novel pipeline that streamlines cell instance segmentation. Unlike traditional methods, CellBoxify operates solely on bounding box annotations, which are approximately seven times faster to produce than manual segmentation masks for each cell. The effectiveness of the proposed approach is evident in its performance on the LIVECell dataset, a well-known resource for cell segmentation research: it achieves 83.40% of fully supervised performance on this dataset.
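The record does not include the paper's implementation details. As a hedged illustration of the general box-supervised idea the abstract describes, the sketch below derives a binary pseudo-mask from a single bounding box by thresholding intensities inside the box (Otsu's method). The function names and the thresholding choice are assumptions for illustration only, not the authors' CellBoxify method.

```python
import numpy as np

def otsu_threshold(values, bins=64):
    """Pick the intensity threshold that maximizes between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(hist)                          # pixel count at or below each bin
    w1 = w0[-1] - w0                              # pixel count above each bin
    mu = np.cumsum(hist * centers)
    mu0 = mu / np.maximum(w0, 1e-12)              # mean intensity of the lower class
    mu1 = (mu[-1] - mu) / np.maximum(w1, 1e-12)   # mean intensity of the upper class
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

def pseudo_mask_from_box(image, box):
    """Turn one (x0, y0, x1, y1) box into a binary pseudo-mask:
    everything outside the box is background; inside the box, pixels
    brighter than the box-local Otsu threshold count as cell."""
    x0, y0, x1, y1 = box
    patch = image[y0:y1, x0:x1]
    t = otsu_threshold(patch.ravel())
    mask = np.zeros(image.shape, dtype=bool)
    mask[y0:y1, x0:x1] = patch > t
    return mask
```

Pseudo-masks of this kind could then supervise an ordinary instance-segmentation network; the actual CellBoxify pipeline may differ substantially.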

Place, publisher, year, edition, pages
Cham: Springer, 2024. p. 314-328
Series
Lecture Notes in Computer Science (LNCS), ISSN 0302-9743, E-ISSN 1611-3349 ; 14859
Keywords [en]
bounding box annotations, cell segmentation, deep learning, medical imaging, weakly supervised
National Category
Computer graphics and computer vision; Medical Imaging
Identifiers
URN: urn:nbn:se:umu:diva-228484
DOI: 10.1007/978-3-031-66955-2_22
Scopus ID: 2-s2.0-85200686935
ISBN: 9783031669545 (print)
ISBN: 9783031669552 (electronic)
OAI: oai:DiVA.org:umu-228484
DiVA, id: diva2:1889254
Conference
28th Annual Conference on Medical Image Understanding and Analysis, MIUA 2024, Manchester, UK, July 24-26, 2024
Available from: 2024-08-15 Created: 2024-08-15 Last updated: 2025-02-09 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Trygg, Johan

Search in DiVA

By author/editor
Trygg, Johan
By organisation
Department of Chemistry
Computer graphics and computer vision; Medical Imaging

