
Salient object detection based on super-pixel clustering and unified low-rank representation

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Salient object detection based on super-pixel clustering and unified low-rank representation. / Zhang, Qiang; Liu, Yi; Liu, Siyang et al.
In: Computer Vision and Image Understanding, Vol. 161, 08.2017, p. 51-64.

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Harvard

Zhang, Q, Liu, Y, Liu, S & Han, J 2017, 'Salient object detection based on super-pixel clustering and unified low-rank representation', Computer Vision and Image Understanding, vol. 161, pp. 51-64. https://doi.org/10.1016/j.cviu.2017.04.015

APA

Zhang, Q., Liu, Y., Liu, S., & Han, J. (2017). Salient object detection based on super-pixel clustering and unified low-rank representation. Computer Vision and Image Understanding, 161, 51-64. https://doi.org/10.1016/j.cviu.2017.04.015

Vancouver

Zhang Q, Liu Y, Liu S, Han J. Salient object detection based on super-pixel clustering and unified low-rank representation. Computer Vision and Image Understanding. 2017 Aug;161:51-64. Epub 2017 May 1. doi: 10.1016/j.cviu.2017.04.015

Author

Zhang, Qiang ; Liu, Yi ; Liu, Siyang et al. / Salient object detection based on super-pixel clustering and unified low-rank representation. In: Computer Vision and Image Understanding. 2017 ; Vol. 161. pp. 51-64.

Bibtex

@article{db833c9ceac740b38508f027c3464cfb,
title = "Salient object detection based on super-pixel clustering and unified low-rank representation",
abstract = "In this paper, we present a novel salient object detection method, efficiently combining Laplacian sparse subspace clustering (LSSC) and unified low-rank representation (ULRR). Unlike traditional low-rank matrix recovery (LRMR) based saliency detection methods which mainly extract saliency from pixels or super-pixels, our method advocates the saliency detection on the super-pixel clusters generated by LSSC. By doing so, our method succeeds in extracting large-size salient objects from cluttered backgrounds, against the detection of small-size salient objects from simple backgrounds obtained by most existing work. The entire algorithm is carried out in two stages: region clustering and cluster saliency detection. In the first stage, the input image is segmented into many super-pixels, and on top of it, they are further grouped into different clusters by using LSSC. Each cluster contains multiple super-pixels having similar features (e.g., colors and intensities), and may correspond to a part of a salient object in the foreground or a local region in the background. In the second stage, we formulate the saliency detection of each super-pixel cluster as a unified low-rankness and sparsity pursuit problem using a ULRR model, which integrates a Laplacian regularization term with respect to the sparse error matrix into the traditional low-rank representation (LRR) model. The whole model is based on a sensible cluster-consistency assumption that the spatially adjacent super-pixels within the same cluster should have similar saliency values, similar representation coefficients as well as similar reconstruction errors. In addition, we construct a primitive dictionary for the ULRR model in terms of the local-global color contrast of each super-pixel. On top of it, a global saliency measure covering the representation coefficients and a local saliency measure considering the sparse reconstruction errors are jointly employed to define the final saliency measure. Comprehensive experiments over diverse publicly available benchmark data sets demonstrate the validity of the proposed method.",
keywords = "Salient object detection, Laplacian sparse subspace clustering, Unified low-rank representation, Primitive saliency dictionary construction, Super-pixel cluster",
author = "Qiang Zhang and Yi Liu and Siyang Liu and Jungong Han",
year = "2017",
month = aug,
doi = "10.1016/j.cviu.2017.04.015",
language = "English",
volume = "161",
pages = "51--64",
journal = "Computer Vision and Image Understanding",
issn = "1077-3142",
publisher = "Academic Press Inc.",
}
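
The abstract above describes a two-stage pipeline whose first stage segments the image into super-pixels and groups them into feature-consistent clusters via LSSC. The toy sketch below, assuming only NumPy, substitutes fixed grid patches for proper super-pixels and a plain k-means (with deterministic farthest-point seeding) for LSSC, since the paper's clustering is not available in common libraries; `grid_superpixels` and `kmeans` are illustrative names, not the authors' code.

```python
# Stage-1 sketch (region clustering): grid patches stand in for super-pixels,
# and k-means on mean colors stands in for Laplacian sparse subspace clustering.
import numpy as np

def grid_superpixels(image, patch=8):
    """Split an HxWx3 image into non-overlapping patches; return mean colors."""
    h, w, _ = image.shape
    feats = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            feats.append(image[y:y + patch, x:x + patch].reshape(-1, 3).mean(axis=0))
    return np.array(feats)

def kmeans(feats, k, iters=50):
    """Minimal k-means with farthest-point seeding; one label per super-pixel."""
    centers = [feats[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(feats - c, axis=1) for c in centers], axis=0)
        centers.append(feats[int(d.argmax())])  # pick the point farthest from all seeds
    centers = np.array(centers)
    for _ in range(iters):
        d = np.linalg.norm(feats[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = feats[labels == j].mean(axis=0)
    return labels

if __name__ == "__main__":
    # Synthetic image: dark background with one bright square "object".
    img = np.zeros((64, 64, 3))
    img[16:48, 16:48] = 1.0
    feats = grid_superpixels(img)
    labels = kmeans(feats, k=2)
    print(len(feats), len(set(labels.tolist())))  # 64 grid super-pixels, 2 clusters
```

Each resulting cluster plays the role the paper assigns it: a group of similar super-pixels that may cover part of the salient object or part of the background.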

RIS

TY - JOUR

T1 - Salient object detection based on super-pixel clustering and unified low-rank representation

AU - Zhang, Qiang

AU - Liu, Yi

AU - Liu, Siyang

AU - Han, Jungong

PY - 2017/8

Y1 - 2017/8

N2 - In this paper, we present a novel salient object detection method, efficiently combining Laplacian sparse subspace clustering (LSSC) and unified low-rank representation (ULRR). Unlike traditional low-rank matrix recovery (LRMR) based saliency detection methods which mainly extract saliency from pixels or super-pixels, our method advocates the saliency detection on the super-pixel clusters generated by LSSC. By doing so, our method succeeds in extracting large-size salient objects from cluttered backgrounds, against the detection of small-size salient objects from simple backgrounds obtained by most existing work. The entire algorithm is carried out in two stages: region clustering and cluster saliency detection. In the first stage, the input image is segmented into many super-pixels, and on top of it, they are further grouped into different clusters by using LSSC. Each cluster contains multiple super-pixels having similar features (e.g., colors and intensities), and may correspond to a part of a salient object in the foreground or a local region in the background. In the second stage, we formulate the saliency detection of each super-pixel cluster as a unified low-rankness and sparsity pursuit problem using a ULRR model, which integrates a Laplacian regularization term with respect to the sparse error matrix into the traditional low-rank representation (LRR) model. The whole model is based on a sensible cluster-consistency assumption that the spatially adjacent super-pixels within the same cluster should have similar saliency values, similar representation coefficients as well as similar reconstruction errors. In addition, we construct a primitive dictionary for the ULRR model in terms of the local-global color contrast of each super-pixel. On top of it, a global saliency measure covering the representation coefficients and a local saliency measure considering the sparse reconstruction errors are jointly employed to define the final saliency measure. Comprehensive experiments over diverse publicly available benchmark data sets demonstrate the validity of the proposed method.

AB - In this paper, we present a novel salient object detection method, efficiently combining Laplacian sparse subspace clustering (LSSC) and unified low-rank representation (ULRR). Unlike traditional low-rank matrix recovery (LRMR) based saliency detection methods which mainly extract saliency from pixels or super-pixels, our method advocates the saliency detection on the super-pixel clusters generated by LSSC. By doing so, our method succeeds in extracting large-size salient objects from cluttered backgrounds, against the detection of small-size salient objects from simple backgrounds obtained by most existing work. The entire algorithm is carried out in two stages: region clustering and cluster saliency detection. In the first stage, the input image is segmented into many super-pixels, and on top of it, they are further grouped into different clusters by using LSSC. Each cluster contains multiple super-pixels having similar features (e.g., colors and intensities), and may correspond to a part of a salient object in the foreground or a local region in the background. In the second stage, we formulate the saliency detection of each super-pixel cluster as a unified low-rankness and sparsity pursuit problem using a ULRR model, which integrates a Laplacian regularization term with respect to the sparse error matrix into the traditional low-rank representation (LRR) model. The whole model is based on a sensible cluster-consistency assumption that the spatially adjacent super-pixels within the same cluster should have similar saliency values, similar representation coefficients as well as similar reconstruction errors. In addition, we construct a primitive dictionary for the ULRR model in terms of the local-global color contrast of each super-pixel. On top of it, a global saliency measure covering the representation coefficients and a local saliency measure considering the sparse reconstruction errors are jointly employed to define the final saliency measure. Comprehensive experiments over diverse publicly available benchmark data sets demonstrate the validity of the proposed method.

KW - Salient object detection

KW - Laplacian sparse subspace clustering

KW - Unified low-rank representation

KW - Primitive saliency dictionary construction

KW - Super-pixel cluster

U2 - 10.1016/j.cviu.2017.04.015

DO - 10.1016/j.cviu.2017.04.015

M3 - Journal article

VL - 161

SP - 51

EP - 64

JO - Computer Vision and Image Understanding

JF - Computer Vision and Image Understanding

SN - 1077-3142

ER -
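
The paper's second stage poses cluster saliency as a joint low-rankness and sparsity pursuit: the redundant background is captured by a low-rank part and the salient object by sparse reconstruction errors. As a heavily simplified, NumPy-only stand-in, the sketch below uses a plain low-rank + sparse split (RPCA-style, solved by naive alternating proximal steps) and scores each column of a feature matrix by its sparse-error mass, mirroring the "local saliency measure" idea; it omits the ULRR model's Laplacian regularizer and primitive dictionary, and `sparse_saliency` is an illustrative name, not the authors' implementation.

```python
# Stage-2 sketch (cluster saliency): decompose the feature matrix F ≈ L + S,
# where L is low-rank (background redundancy) and S is sparse (salient deviation),
# then read per-column saliency off |S|.
import numpy as np

def svt(m, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    return (u * np.maximum(s - tau, 0.0)) @ vt

def shrink(m, lam):
    """Entrywise soft threshold: proximal operator of the L1 norm."""
    return np.sign(m) * np.maximum(np.abs(m) - lam, 0.0)

def sparse_saliency(f, tau=5.0, lam=1.0, iters=20):
    """Alternate L = svt(F - S) and S = shrink(F - L); score columns by |S|."""
    s = np.zeros_like(f)
    for _ in range(iters):
        l = svt(f - s, tau)        # low-rank part: redundant background
        s = shrink(f - l, lam)     # sparse part: deviations from the background
    return np.abs(s).sum(axis=0)   # one saliency score per column (region)

if __name__ == "__main__":
    # Synthetic feature matrix: rank-1 "background" columns, plus two columns
    # (indices 3 and 7) carrying large deviations that play the salient object.
    f = np.outer(np.ones(10), np.linspace(1.0, 2.0, 20))
    f[:5, 3] += 5.0
    f[5:, 7] += 5.0
    scores = sparse_saliency(f)
    print(sorted(np.argsort(scores)[-2:].tolist()))  # the deviant columns stand out
```

The choice of `tau` and `lam` here is tuned only to this toy example; in the paper these trade-offs are handled by the unified model with its cluster-consistency regularization rather than by hand-set thresholds.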