Interactive object extraction by merging regions with k-global maximal similarity

Taiyong Li, Zhilong Xie, Jiang Wu, Jingwen Yan, Li Shen

Research output: Contribution to journal › Article

6 Scopus citations

Abstract

Object extraction from still images is an important task in pattern recognition and computer vision. A fully automatic object extraction method is very hard to achieve in practical applications, so a good solution is to interactively extract objects from a complex background with a few simple user inputs. This paper presents an interactive method for extracting objects from still images. A still image is first divided into many small regions by a low-level segmentation method. Strokes (or markers), which can be as simple as a few points, are manually input by the user to indicate the three initial types of regions, i.e., object regions, background regions and unmarked regions. A region is merged with an adjacent region if (1) both regions are of the same type or (2) either of them is an unmarked region and the similarity between them is among the k-global maximal ones. The region merging process completes when each initially unmarked region has been identified as either object or background and the similarity between any two regions equals zero. Extensive experiments demonstrate the efficiency and effectiveness of the proposed method.
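The merging rule described in the abstract lends itself to a short illustration. The Python sketch below is not the authors' implementation: it assumes an already over-segmented image whose adjacent-region similarities are supplied by the caller, and the data structures, the union-find bookkeeping, the parameter names, and the reading of rule (1) as "both regions carry the same user-assigned label" are assumptions made for clarity.

    # Minimal sketch of marker-based region merging with a k-global maximal
    # similarity rule (illustrative only, not the authors' reference code).
    OBJECT, BACKGROUND, UNMARKED = "object", "background", "unmarked"

    def merge_regions(labels, similarity, k):
        """labels: {region_id: OBJECT | BACKGROUND | UNMARKED}, set from user strokes.
        similarity: {frozenset({a, b}): float} for adjacent region pairs.
        Returns the final label of every initial region."""
        labels = dict(labels)                        # do not mutate the caller's dict
        parent = {r: r for r in labels}              # union-find forest over regions

        def find(r):                                 # representative of r's merged group
            while parent[r] != r:
                parent[r] = parent[parent[r]]
                r = parent[r]
            return r

        def union(a, b):                             # merge b's group into a's group
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[rb] = ra
                if labels[ra] == UNMARKED:           # keep a marked label if one exists
                    labels[ra] = labels[rb]

        changed = True
        while changed:
            changed = False
            # Pairs that still join two different groups, ranked by similarity.
            live = [(s, p) for p, s in similarity.items()
                    if s > 0 and find(tuple(p)[0]) != find(tuple(p)[1])]
            live.sort(key=lambda sp: sp[0], reverse=True)
            top_k = {p for _, p in live[:k]}         # the k globally largest similarities
            for s, p in live:
                a, b = tuple(p)
                if find(a) == find(b):               # already merged earlier in this pass
                    continue
                la, lb = labels[find(a)], labels[find(b)]
                same_marked_type = la == lb and la != UNMARKED   # rule (1), as read above
                touches_unmarked = UNMARKED in (la, lb)          # rule (2) precondition
                if same_marked_type or (touches_unmarked and p in top_k):
                    union(a, b)
                    changed = True
        return {r: labels[find(r)] for r in parent}

A caller would pass the stroke-derived labels of the initial over-segmentation, a similarity value for each adjacent pair (for example a colour-based measure), and a choice of k; a larger k allows more unmarked regions to be absorbed in each pass.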

Original language: English (US)
Pages (from-to): 610-623
Number of pages: 14
Journal: Neurocomputing
Volume: 120
DOIs
State: Published - Nov 23 2013

Keywords

  • Image segmentation
  • Interactive object extraction
  • Region merging
  • Similarity

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Science Applications
  • Cognitive Neuroscience
