Adaptive patch-based sparsity estimation for image via MOEA/D

Yu ZHOU, Sam KWONG, Qingfu ZHANG, Mengyuan WU

Research output: Conference paper (refereed), peer-reviewed

3 Citations (Scopus)


Natural images contain extensive and varied information, which makes estimating the sparsity of an image very challenging. In this paper, we propose an adaptive sparsity estimation model for image patches that consists of an offline training phase and an online estimation phase. In the offline phase, MOEA/D is applied to each training patch to obtain a group of Pareto solutions and determine a sparsity range. By processing a reduced number of representative training patches, all sparsity ranges are stored in a look-up table (LUT) for reuse. In the online estimation phase, the sparsity range of a query patch is set to that of the most similar training patch, and the corresponding sparse representation vector is obtained by a sparsity-restricted greedy algorithm (SRGA) constrained by this range. The sparsity is thus determined adaptively, within the range, by this sparse representation vector. Experimental studies on a benchmark dataset, comparing against state-of-the-art greedy algorithms with fixed sparsity and one adaptive method, demonstrate that our proposed approach achieves better sparse representation quality in terms of PSNR and coding efficiency.
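The online phase described above can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the function names `srga` and `estimate_sparsity`, the OMP-style atom-selection rule, and the Euclidean patch-similarity measure are assumptions; the paper specifies only that the query's sparsity range is taken from the most similar training patch in the LUT and that a sparsity-restricted greedy algorithm operates within that range.

```python
import numpy as np

def srga(y, D, s_min, s_max, tol=1e-6):
    """Sparsity-restricted greedy algorithm (hypothetical OMP-style sketch):
    grow the support greedily, stopping adaptively within [s_min, s_max]."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(D.shape[1])
    coef = np.zeros(0)
    for _ in range(s_max):
        # pick the dictionary atom most correlated with the residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit of y on the atoms selected so far
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
        # once the minimum sparsity is reached, allow early stopping
        if len(support) >= s_min and np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x

def estimate_sparsity(query, train_patches, lut_ranges, D):
    """Online phase: take the sparsity range of the most similar
    training patch from the LUT, then run SRGA within that range."""
    dists = np.linalg.norm(train_patches - query, axis=1)
    s_min, s_max = lut_ranges[int(np.argmin(dists))]
    x = srga(query, D, s_min, s_max)
    return x, int(np.count_nonzero(x))
```

With a trivial orthonormal dictionary (the identity) and a patch with two active coefficients, `estimate_sparsity` recovers the signal exactly and reports a sparsity of 2, illustrating the adaptive early stop inside the LUT-provided range.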
Original language: English
Title of host publication: 2016 IEEE Congress on Evolutionary Computation (CEC)
Number of pages: 8
ISBN (Electronic): 9781509006236
ISBN (Print): 9781509006243
Publication status: Published - Jul 2016
Externally published: Yes
Event: 2016 IEEE Congress on Evolutionary Computation - Vancouver, Canada
Duration: 24 Jul 2016 - 29 Jul 2016


Conference: 2016 IEEE Congress on Evolutionary Computation
Abbreviated title: CEC 2016


Keywords:
  • Knee region detection
  • Multi-objective optimization
  • Sparse coding
  • Sparsity estimation


