Visual attention is one of the research hotspots in the field of machine vision, and it is of positive significance for the development of target detection and target tracking. This paper presents a particle filter based visual attention model that is applied to moving target detection. First, according to Bayesian estimation theory, a method of particle weight calculation is established through bidirectional (top-down/bottom-up) fusion of visual attention. Then motion attention and target color attention are adopted as inputs of the attention model, and moving target saliency is calculated through importance sampling, particle weight calculation, resampling, and particle saliency map processing. Finally, the target position is determined by the distribution of the particles. Test results on different complex video scenes show that this method detects moving targets more effectively and accurately than traditional methods.
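To give a concrete picture of the pipeline the abstract describes, the following is a minimal sketch of a sampling-importance-resampling (SIR) particle filter whose weights fuse a bottom-up motion cue with a top-down color cue. The cue maps (motion_saliency, color_similarity), the multiplicative fusion rule, and the parameters (n_particles, sigma, the histogram size) are illustrative assumptions, not the authors' exact formulation.

import numpy as np

def motion_saliency(cur, prev):
    # Bottom-up cue: absolute frame difference as a crude motion map
    # (frames are assumed to be H x W x 3 uint8 arrays).
    return np.abs(cur.astype(np.float32) - prev.astype(np.float32)).mean(axis=-1)

def color_similarity(frame, target_hist, bins=16):
    # Top-down cue: per-pixel likelihood under a normalised grey-level
    # histogram of the target (a stand-in for the paper's colour model).
    idx = np.clip((frame.mean(axis=-1) / 256.0 * bins).astype(int), 0, bins - 1)
    return target_hist[idx]

def particle_filter_saliency(frames, target_hist, n_particles=500, sigma=8.0, seed=0):
    rng = np.random.default_rng(seed)
    h, w = frames[0].shape[:2]
    # Importance sampling: initialise particles uniformly over the image plane.
    particles = np.column_stack([rng.uniform(0, h, n_particles),
                                 rng.uniform(0, w, n_particles)])
    positions = []
    for prev, cur in zip(frames[:-1], frames[1:]):
        # Propagate particles with a simple Gaussian random-walk model.
        particles += rng.normal(0.0, sigma, particles.shape)
        particles[:, 0] = np.clip(particles[:, 0], 0, h - 1)
        particles[:, 1] = np.clip(particles[:, 1], 0, w - 1)
        r = particles[:, 0].astype(int)
        c = particles[:, 1].astype(int)
        # Particle weight = bottom-up motion saliency x top-down colour
        # similarity at the particle location (the fusion rule is an assumption).
        bu = motion_saliency(cur, prev)[r, c]
        td = color_similarity(cur, target_hist)[r, c]
        weights = bu * td + 1e-12
        weights /= weights.sum()
        # Multinomial resampling by weight (standard SIR step).
        particles = particles[rng.choice(n_particles, n_particles, p=weights)]
        # The mean of the resampled particles approximates the target position;
        # a 2-D histogram of the particles would give a particle saliency map.
        positions.append(particles.mean(axis=0))
    return positions

Swapping the frame-difference map for an optical-flow magnitude, or the multinomial resampling for systematic resampling, would not change the overall structure of this sketch.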
LIU Long, FAN Bo-yang, LIU Jin-xing, YANG Le-chao. Particle Filtering Based Visual Attention Model for Moving Target Detection[J]. Acta Electronica Sinica, 2016, 44(9): 2235-2241.