Interactive prostate MR image segmentation based on ConvLSTMs and GGNN

Research output: Contribution to journal › Article › peer-review

19 Scopus citations

Abstract

Accurate segmentation of the prostate on magnetic resonance (MR) images plays an important role in prostate cancer diagnosis and treatment. Although many automated prostate segmentation methods have been proposed, their performance still faces several challenges, including large variability in prostate shape, unclear boundaries, and complex intensity distributions. The results obtained from automated methods therefore need further refinement by users to reach a more accurate and reliable segmentation. In this paper, we propose an end-to-end interactive segmentation method to refine automated results. A convolutional long short-term memory (ConvLSTM) module and a gated graph neural network (GGNN) are combined in the proposed method for prostate segmentation in both automated and interactive modes. A boundary loss is proposed to train our model. We evaluated the proposed method on two publicly available datasets and one in-house dataset. Experimental results show that the proposed ConvLSTM module obtains a DSC of 91.78% on the test dataset, outperforming eight state-of-the-art methods. A further 1.5% improvement can be obtained through user interactions based on the GGNN. The segmentation time, including user interactions and inference, averaged 2.3 min per volume.
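The abstract's ConvLSTM module propagates hidden state across adjacent MR slices so that each slice's segmentation benefits from inter-slice context. Below is a minimal NumPy sketch of one generic ConvLSTM step; the channel counts, kernel size, and gate ordering are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv2d_same(x, w):
    """Naive 'same'-padded 2-D cross-correlation.
    x: (C_in, H, W) feature map; w: (C_out, C_in, k, k) kernels."""
    c_out, c_in, k, _ = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    _, H, W = x.shape
    out = np.zeros((c_out, H, W))
    for co in range(c_out):
        for i in range(H):
            for j in range(W):
                out[co, i, j] = np.sum(xp[:, i:i + k, j:j + k] * w[co])
    return out

def convlstm_cell(x, h, c, w):
    """One ConvLSTM step, replacing the matrix products of a plain LSTM
    with convolutions so spatial structure is preserved.
    x: (C_x, H, W) features of the current slice;
    h, c: (C_h, H, W) hidden and cell state from the previous slice;
    w: (4*C_h, C_x + C_h, k, k) kernels producing gates i, f, o, g."""
    ch = h.shape[0]
    z = conv2d_same(np.concatenate([x, h], axis=0), w)
    i, f, o, g = (z[n * ch:(n + 1) * ch] for n in range(4))
    c_next = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # update cell state
    h_next = sigmoid(o) * np.tanh(c_next)               # emit hidden state
    return h_next, c_next

# Toy usage: 2 input channels, 4 hidden channels, 8x8 slices, 3x3 kernels.
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8, 8))
h = np.zeros((4, 8, 8))
c = np.zeros((4, 8, 8))
w = 0.1 * rng.standard_normal((16, 6, 3, 3))
h, c = convlstm_cell(x, h, c, w)
```

Sweeping this cell over the slice dimension of a volume (and, as is common, also in the reverse direction) yields per-slice features that a segmentation head can decode; the paper's actual module may differ in depth, direction, and gating details.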

Original language: English
Pages (from-to): 84-93
Number of pages: 10
Journal: Neurocomputing
Volume: 438
DOIs
State: Published - 28 May 2021

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 3 - Good Health and Well-being

Keywords

  • Gated graph neural network
  • Long short term memory
  • Medical image segmentation
  • User interaction
