
Salient Object Detection Using Weighted K-Nearest Neighbor Linear Blending

Wei LI, Quanlong LI, Zhengyi LIU

Citation: Wei LI, Quanlong LI, Zhengyi LIU. Salient Object Detection Using Weighted K-nearest Neighbor Linear Blending[J]. Journal of Electronics and Information Technology, 2019, 41(10): 2442-2449. doi: 10.11999/JEIT190093


    About the authors: LI Wei: Female, born in 1969, professor. Research interest: computer vision.
    LI Quanlong: Male, born in 1995, M.S. candidate. Research interest: image saliency detection.
    LIU Zhengyi: Female, born in 1978, associate professor. Research interest: computer vision.
    Corresponding author: LIU Zhengyi, liuzywen@ahu.edu.cn
Abstract: Salient object detection aims to detect automatically the objects or regions in a scene that attract human attention. Among bottom-up methods, ensemble learning based on multiple-kernel Support Vector Machines (SVMs) achieves excellent results. However, that approach must be retrained for every image to be processed, and each round of training is very time-consuming. This paper therefore proposes a salient object detection method based on Weighted K-Nearest Neighbor Linear Blending (WKNNLB). Existing methods are first used to produce initial weak saliency maps and to obtain training samples. A Weighted K-Nearest Neighbor (WKNN) model is then introduced to predict the saliency value of each sample; the model needs no training process and only requires selecting an optimal K and computing the Euclidean distances from a test sample to its K nearest training samples. To reduce the influence of the choice of K, multiple WKNN models are fused by linear blending to produce a strong saliency map. Finally, the multi-scale weak saliency maps and the strong saliency map are fused to further improve detection. Experimental results on the commonly used ASD dataset and the more challenging DUT-OMRON dataset demonstrate the effectiveness and superiority of the algorithm in both running time and performance. When better weak saliency maps are used, the algorithm achieves even better results.
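
To make the core ideas concrete, the sketch below shows one way to implement the weighted K-nearest neighbor prediction and the linear blending described in the abstract, using Python and NumPy. It is a minimal illustration rather than the authors' code: the inverse-distance weighting, the equal blending weights, and all names are assumptions.

```python
# A minimal sketch (not the authors' implementation) of two ideas from the
# abstract: a WKNN prediction that needs no training, and a linear blending
# of several WKNN models with different K values.
import numpy as np

def wknn_predict(test_feats, train_feats, train_labels, k):
    """Predict a saliency value for each test sample as the distance-weighted
    average of the labels of its k nearest training samples (assumed weighting)."""
    preds = np.empty(len(test_feats))
    for i, x in enumerate(test_feats):
        d = np.linalg.norm(train_feats - x, axis=1)   # Euclidean distances
        idx = np.argsort(d)[:k]                       # k nearest training samples
        w = 1.0 / (d[idx] + 1e-8)                     # closer samples weigh more
        preds[i] = np.dot(w, train_labels[idx]) / w.sum()
    return preds

def linear_blend(test_feats, train_feats, train_labels, ks, weights=None):
    """Fuse several WKNN models (different K) by a linear combination to
    reduce sensitivity to any single choice of K."""
    maps = [wknn_predict(test_feats, train_feats, train_labels, k) for k in ks]
    weights = np.ones(len(ks)) / len(ks) if weights is None else np.asarray(weights)
    return np.average(np.stack(maps), axis=0, weights=weights)

# Toy usage: 200 "training" superpixels pseudo-labeled by a weak saliency map,
# 50 test superpixels, blending K in {5, 10, 15} (all values illustrative).
rng = np.random.default_rng(0)
train_x, train_y = rng.random((200, 65)), rng.random(200)
test_x = rng.random((50, 65))
strong_saliency = linear_blend(test_x, train_x, train_y, ks=[5, 10, 15])
```

Because no model is fitted, the only per-image cost is the distance computation, which is consistent with the lower running times the abstract reports compared with retraining an SVM for every image.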


References

    [1] BORJI A and ITTI L. State-of-the-art in visual attention modeling[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(1): 185–207. doi: 10.1109/TPAMI.2012.89

    [2] ITTI L. Automatic foveation for video compression using a neurobiological model of visual attention[J]. IEEE Transactions on Image Processing, 2004, 13(10): 1304–1318. doi: 10.1109/TIP.2004.834657

    [3] ZHANG Fan, DU Bo, and ZHANG Liangpei. Saliency-guided unsupervised feature learning for scene classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2015, 53(4): 2175–2184. doi: 10.1109/TGRS.2014.2357078

    [4] LU Xiaoqiang, ZHENG Xiangtao, and LI Xuelong. Latent semantic minimal hashing for image retrieval[J]. IEEE Transactions on Image Processing, 2017, 26(1): 355–368. doi: 10.1109/TIP.2016.2627801

    [5] WEI Yunchao, XIAO Huaxin, SHI Honghui, et al. Revisiting dilated convolution: A simple approach for weakly- and semi-supervised semantic segmentation[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 7268–7277. doi: 10.1109/CVPR.2018.00759

    [6] ZHANG Xiaoning, WANG Tiantian, QI Jinqing, et al. Progressive attention guided recurrent network for salient object detection[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 714–722. doi: 10.1109/CVPR.2018.00081

    [7] CHEN Shuhan, TAN Xiuli, WANG Ben, et al. Reverse attention for salient object detection[C]. The 15th European Conference on Computer Vision, Munich, Germany, 2018: 236–252. doi: 10.1007/978-3-030-01240-3_15

    [8] ZHANG Lu, DAI Ju, LU Huchuan, et al. A bi-directional message passing model for salient object detection[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 1741–1750. doi: 10.1109/CVPR.2018.00187

    [9] WANG Tiantian, ZHANG Lihe, WANG Shuo, et al. Detect globally, refine locally: A novel approach to saliency detection[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 3127–3135. doi: 10.1109/CVPR.2018.00330

    [10] HOU Qibin, CHENG Mingming, HU Xiaowei, et al. Deeply supervised salient object detection with short connections[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41(4): 815–828. doi: 10.1109/TPAMI.2018.2815688

    [11] YANG Chuan, ZHANG Lihe, LU Huchuan, et al. Saliency detection via graph-based manifold ranking[C]. 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, USA, 2013: 3166–3173. doi: 10.1109/CVPR.2013.407

    [12] CHENG Mingming, WARRELL J, LIN Wenyan, et al. Efficient salient region detection with soft image abstraction[C]. 2013 IEEE International Conference on Computer Vision, Sydney, Australia, 2013: 1529–1536. doi: 10.1109/ICCV.2013.193

    [13] ZHANG Jianming, SCLAROFF S, LIN Zhe, et al. Minimum barrier salient object detection at 80 FPS[C]. 2015 IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 1404–1412. doi: 10.1109/ICCV.2015.165

    [14] BORJI A, CHENG Mingming, JIANG Huaizu, et al. Salient object detection: A benchmark[J]. IEEE Transactions on Image Processing, 2015, 24(12): 5706–5722. doi: 10.1109/TIP.2015.2487833

    [15] TONG Na, LU Huchuan, RUAN Xiang, et al. Salient object detection via bootstrap learning[C]. 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 2015: 1884–1892. doi: 10.1109/CVPR.2015.7298798

    [16] LU Huchuan, ZHANG Xiaoning, QI Jinqing, et al. Co-bootstrapping saliency[J]. IEEE Transactions on Image Processing, 2017, 26(1): 414–425. doi: 10.1109/TIP.2016.2627804

    [17] SONG Hangke, LIU Zhi, DU Huan, et al. Depth-aware salient object detection and segmentation via multiscale discriminative saliency fusion and bootstrap learning[J]. IEEE Transactions on Image Processing, 2017, 26(9): 4204–4216. doi: 10.1109/TIP.2017.2711277

    [18] WU Lishan, LIU Zhi, SONG Hangke, et al. RGBD co-saliency detection via multiple kernel boosting and fusion[J]. Multimedia Tools and Applications, 2018, 77(16): 21185–21199. doi: 10.1007/s11042-017-5576-y

    [19] ACHANTA R, SHAJI A, SMITH K, et al. SLIC superpixels compared to state-of-the-art superpixel methods[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(11): 2274–2282. doi: 10.1109/TPAMI.2012.120

    [20] OJALA T, PIETIKAINEN M, and MAENPAA T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002, 24(7): 971–987. doi: 10.1109/TPAMI.2002.1017623

    [21] ACHANTA R, HEMAMI S, ESTRADA F, et al. Frequency-tuned salient region detection[C]. 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, USA, 2009: 1597–1604. doi: 10.1109/CVPR.2009.5206596

    [22] ITTI L, KOCH C, and NIEBUR E. A model of saliency-based visual attention for rapid scene analysis[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20(11): 1254–1259. doi: 10.1109/34.730558

    [23] SHEN Xiaohui and WU Ying. A unified approach to salient object detection via low rank matrix recovery[C]. 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, USA, 2012: 853–860. doi: 10.1109/CVPR.2012.6247758


  • Fig. 1  Framework of the proposed method

    Fig. 2  Illustration of the strong saliency model

    Fig. 3  Illustration of the weighted K-nearest neighbor model

    Fig. 4  F-measure curves on the ASD dataset for different values of m

    Fig. 5  F-measure curves on the ASD dataset for different values of n

    Fig. 6  Visual comparison of saliency maps produced by different methods

    Fig. 7  P-R curves of each method and its improved version on the ASD and DUT-OMRON datasets

    Fig. 8  P-R curves of WKNNLB and BLSVM on the ASD and DUT-OMRON datasets

    Table 1  Values of the feature ${\text{f}}_i^j$ (65 dimensions)

    Feature dimension index | Feature            | Dimension size | Value range
    0–2                     | Mean RGB values    | 3              | [0, 1]
    3–5                     | Mean CIELab values | 3              | [0, 1]
    6–64                    | LBP histogram      | 59             | [0, 1]
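
    As a companion to Table 1, the sketch below assembles such a 65-dimensional feature for each superpixel: mean RGB, mean CIELab, and a 59-bin uniform LBP histogram, each scaled to [0, 1]. It is an assumed reconstruction (SLIC superpixels and LBP as in references [19] and [20], implemented here with scikit-image); the segmentation and LBP parameters are illustrative, not the authors' settings.

```python
# Hedged sketch of building the 65-dim per-superpixel feature of Table 1:
# 3 mean RGB values + 3 mean CIELab values + a 59-bin uniform LBP histogram.
# Input: an H x W x 3 uint8 RGB image. Parameter values are assumptions.
import numpy as np
from skimage import color, segmentation
from skimage.feature import local_binary_pattern

def superpixel_features(rgb_u8):
    rgb = rgb_u8.astype(np.float64) / 255.0
    lab = color.rgb2lab(rgb)
    lab_scaled = (lab - lab.min()) / (lab.max() - lab.min() + 1e-8)  # rough [0,1] scaling (assumption)
    gray = color.rgb2gray(rgb)
    # "nri_uniform" LBP with P=8, R=1 yields 59 distinct codes -> 59-bin histogram
    lbp = local_binary_pattern(gray, P=8, R=1, method="nri_uniform")
    labels = segmentation.slic(rgb, n_segments=200, start_label=0)

    feats = np.zeros((labels.max() + 1, 65))
    for s in range(labels.max() + 1):
        mask = labels == s
        feats[s, 0:3] = rgb[mask].mean(axis=0)          # mean RGB, already in [0,1]
        feats[s, 3:6] = lab_scaled[mask].mean(axis=0)   # mean CIELab, scaled to [0,1]
        hist, _ = np.histogram(lbp[mask], bins=59, range=(0, 59))
        feats[s, 6:65] = hist / max(hist.sum(), 1)      # normalized 59-bin LBP histogram
    return feats, labels
```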

    Table 2  F-measure comparison of five classical methods and their improved versions

    Dataset   | IT    | IT+   | LRMR  | LRMR+ | GC    | GC+   | MR    | MR+   | MBD   | MBD+
    ASD       | 0.381 | 0.542 | 0.727 | 0.829 | 0.811 | 0.848 | 0.869 | 0.876 | 0.855 | 0.867
    DUT-OMRON | 0.343 | 0.541 | 0.498 | 0.545 | 0.520 | 0.554 | 0.574 | 0.576 | 0.850 | 0.854

    Table 3  F-measure and running time (s) of WKNNLB and BLSVM on four datasets

    Method  | ASD               | SED2              | PASCAL-S          | DUT-OMRON
            | F-measure | Time  | F-measure | Time  | F-measure | Time  | F-measure | Time
    WKNNLB  | 0.820     | 4058  | 0.758     | 332   | 0.655     | 5000  | 0.534     | 30864
    BLSVM   | 0.810     | 8093  | 0.740     | 720   | 0.651     | 11184 | 0.524     | 65120
Article information
  • Received: 2019-02-01
  • Accepted: 2019-06-03
  • Available online: 2019-06-12
  • Published: 2019-10-01