A two-stage image segmentation method using a convex variant of the Mumford-Shah model and thresholding

Xiaohao CAI*, Raymond CHAN, Tieyong ZENG

*Corresponding author for this work

Research output: Journal Publications › Journal Article (refereed) › peer-review

169 Citations (Scopus)

Abstract

The Mumford-Shah model is one of the most important image segmentation models and has been studied extensively in the last twenty years. In this paper, we propose a two-stage segmentation method based on the Mumford-Shah model. The first stage of our method is to find a smooth solution g to a convex variant of the Mumford-Shah model. Once g is obtained, the segmentation is done in the second stage by thresholding g into different phases. The thresholds can be given by the users or obtained automatically using any clustering method. Because of the convexity of the model, g can be solved efficiently by techniques such as the split-Bregman algorithm or the Chambolle-Pock method. We prove that our method is convergent and that the solution g is always unique. In our method, there is no need to specify the number of segments K (K ≥ 2) before finding g. We can obtain any K-phase segmentation by choosing (K - 1) thresholds after g is found in the first stage, and there is no need to recompute g in the second stage if the thresholds are changed to reveal different segmentation features in the image. Experimental results show that our two-stage method performs better than many standard two-phase or multiphase segmentation methods for very general images, including antimass, tubular, MRI, noisy, and blurry images.
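
As a point of reference, the convex variant mentioned in the abstract is a smoothing energy of roughly the following type; the display below is a sketch reconstructed from the abstract's description (λ and μ denote fidelity and smoothness weights, f the observed image, A a linear, possibly blurring, operator), not a verbatim statement of the functional, which is given in the paper itself:

```latex
% Sketch of a convex smoothing energy of the kind the abstract describes.
% f: observed image, A: linear (e.g. blurring) operator, Omega: image domain.
\min_{g}\;
  \frac{\lambda}{2}\int_{\Omega} (f - Ag)^{2}\,dx
  \;+\; \frac{\mu}{2}\int_{\Omega} |\nabla g|^{2}\,dx
  \;+\; \int_{\Omega} |\nabla g|\,dx .
% Second stage: given thresholds t_1 < \dots < t_{K-1}, the K phases are
% \Sigma_k = \{\, x \in \Omega : t_{k-1} \le g(x) < t_k \,\}, \quad k = 1,\dots,K,
% with t_0 = -\infty and t_K = +\infty.
```

The Python sketch below illustrates the two-stage pipeline on a toy image. It is not the authors' implementation: Stage 1 is approximated with scikit-image's Chambolle TV (ROF) denoiser as a stand-in for the convex Mumford-Shah variant (which the paper solves with split-Bregman or Chambolle-Pock), and Stage 2 picks the thresholds by k-means clustering of the smoothed gray values. The function name two_stage_segmentation and the parameters n_phases and tv_weight are illustrative choices. Note that the smooth image g is computed once and can be re-thresholded with different thresholds without recomputation, which is the practical advantage described in the abstract.

```python
# Minimal sketch of a smooth-then-threshold segmentation pipeline,
# assuming numpy, scipy, and scikit-image are available.
import numpy as np
from scipy.cluster.vq import kmeans2
from skimage.restoration import denoise_tv_chambolle


def two_stage_segmentation(image, n_phases=2, tv_weight=0.1, thresholds=None):
    """Segment `image` into `n_phases` regions by smoothing, then thresholding."""
    # Stage 1: compute a smooth approximation g of the observed image
    # (TV/ROF denoising used here as a stand-in for the paper's convex model).
    g = denoise_tv_chambolle(image.astype(float), weight=tv_weight)

    # Stage 2: choose K-1 thresholds, either user-supplied or from k-means
    # clustering of the gray values of g, then quantize g into phases.
    if thresholds is None:
        centroids, _ = kmeans2(g.reshape(-1, 1), n_phases, minit="++", seed=0)
        centers = np.sort(centroids.ravel())
        thresholds = 0.5 * (centers[:-1] + centers[1:])  # midpoints between centers
    labels = np.digitize(g, np.sort(np.asarray(thresholds)))
    return g, labels


if __name__ == "__main__":
    # Toy example: a noisy two-phase image (bright disk on a dark background).
    rng = np.random.default_rng(0)
    yy, xx = np.mgrid[:128, :128]
    clean = ((xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2).astype(float)
    noisy = clean + 0.3 * rng.standard_normal(clean.shape)
    g, labels = two_stage_segmentation(noisy, n_phases=2, tv_weight=0.2)
    print("phases found:", np.unique(labels))
```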

Original language: English
Pages (from-to): 368-390
Number of pages: 23
Journal: SIAM Journal on Imaging Sciences
Volume: 6
Issue number: 1
DOIs
Publication status: Published - Jan 2013
Externally published: Yes

Keywords

  • Image segmentation
  • Mumford-Shah model
  • Split-Bregman
  • Total variation
