Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License
Research output: Contribution to Journal/Magazine › Journal article › peer-review
TY - JOUR
T1 - Salient Object Detection Via Two-Stage Graphs
AU - Liu, Yi
AU - Han, Jungong
AU - Zhang, Qiang
AU - Wang, Long
N1 - ©2018 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
PY - 2019/4/1
Y1 - 2019/4/1
N2 - Despite recent advances in salient object detection using graph theory, such approaches still suffer from accuracy problems when the image has a complex structure in either the foreground or the background, causing erroneous saliency segmentation. This fundamental challenge is mainly attributed to the fact that most existing graph-based methods take only the adjacent spatial consistency among graph nodes into consideration. In this paper, we tackle this issue from a coarse-to-fine perspective and propose a two-stage-graph approach for salient object detection, in which two graphs having the same nodes but different edges are employed. Specifically, a weighted joint robust sparse representation model, rather than the commonly used manifold ranking model, computes the saliency value of each node in the first-stage graph, thereby providing a saliency map at the coarse level. In the second-stage graph, along with the adjacent spatial consistency, a new regional spatial consistency among graph nodes is considered in order to refine the coarse saliency map, ensuring uniform saliency assignment even in complex scenes. Notably, the second stage is generic enough to be integrated into existing salient object detectors, improving their performance. Experimental results on benchmark datasets validate the effectiveness and superiority of the proposed scheme over related state-of-the-art methods.
AB - Despite recent advances in salient object detection using graph theory, such approaches still suffer from accuracy problems when the image has a complex structure in either the foreground or the background, causing erroneous saliency segmentation. This fundamental challenge is mainly attributed to the fact that most existing graph-based methods take only the adjacent spatial consistency among graph nodes into consideration. In this paper, we tackle this issue from a coarse-to-fine perspective and propose a two-stage-graph approach for salient object detection, in which two graphs having the same nodes but different edges are employed. Specifically, a weighted joint robust sparse representation model, rather than the commonly used manifold ranking model, computes the saliency value of each node in the first-stage graph, thereby providing a saliency map at the coarse level. In the second-stage graph, along with the adjacent spatial consistency, a new regional spatial consistency among graph nodes is considered in order to refine the coarse saliency map, ensuring uniform saliency assignment even in complex scenes. Notably, the second stage is generic enough to be integrated into existing salient object detectors, improving their performance. Experimental results on benchmark datasets validate the effectiveness and superiority of the proposed scheme over related state-of-the-art methods.
U2 - 10.1109/TCSVT.2018.2823769
DO - 10.1109/TCSVT.2018.2823769
M3 - Journal article
VL - 29
SP - 1023
EP - 1037
JO - IEEE Transactions on Circuits and Systems for Video Technology
JF - IEEE Transactions on Circuits and Systems for Video Technology
SN - 1051-8215
IS - 4
ER -