Acta Electronica Sinica, 2023, Vol. 51, Issue (1): 105-116. DOI: 10.12263/DZXB.20210676

LI Hao, YUAN Guang-lin, QIN Xiao-yan, JU Chang-rui, ZHU Hong

Received: 2021-05-26
Revised: 2021-10-18
Online: 2023-01-25
Published: 2023-02-23
Abstract:
In recent years, the representation of the target state in object tracking has shifted from a coarse rectangular box to a fine object mask. However, existing methods obtain the object mask through region segmentation, which is slow, and the mask accuracy is limited by the tracking bounding box. To address these problems, this paper proposes an object contour tracking method based on correlation filters with a spatially-weighted log-likelihood ratio and Deep Snake. The method consists of three stages: in the first stage, the proposed spatially-weighted log-likelihood-ratio correlation filter estimates the initial rectangular box of the target; in the second stage, Deep Snake deforms the initial rectangular box into the object contour; in the third stage, the tracking result is fitted from the object contour. The proposed method is evaluated on the OTB-2015 (Object Tracking Benchmark) and VOT-2018 (Visual Object Tracking) datasets, and the results show that it achieves better performance than existing state-of-the-art tracking methods.
Hao LI, Guang-lin YUAN, Xiao-yan QIN, et al. Object Contour Tracking Based on Correlation Filters with Spatially-Weighted Logarithm Likelihood Ratio and Deep Snake[J]. Acta Electronica Sinica, 2023, 51(1): 105-116.
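The abstract describes a three-stage pipeline: correlation-filter box estimation, Deep Snake contour deformation, and box fitting from the contour. The following is a minimal structural sketch of that pipeline, not the authors' implementation; the helpers `cf_estimate_box`, `deep_snake_contour`, and `fit_box_from_contour` are hypothetical placeholders whose bodies exist only to keep the sketch runnable.

```python
import numpy as np

Box = tuple  # (x, y, w, h)

def cf_estimate_box(frame: np.ndarray, prev_box: Box) -> Box:
    """Stage 1 (placeholder): a spatially-weighted correlation-filter response
    would be searched around prev_box; here the previous box is simply reused."""
    return prev_box

def deep_snake_contour(frame: np.ndarray, box: Box, n_points: int = 128) -> np.ndarray:
    """Stage 2 (placeholder): Deep Snake would iteratively deform an initial
    contour toward the object boundary; an ellipse inscribed in the box stands
    in for the deformed contour here."""
    x, y, w, h = box
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    cx, cy = x + w / 2.0, y + h / 2.0
    return np.stack([cx + (w / 2.0) * np.cos(theta),
                     cy + (h / 2.0) * np.sin(theta)], axis=1)

def fit_box_from_contour(contour: np.ndarray) -> Box:
    """Stage 3: fit the reported rectangle from the contour points."""
    x_min, y_min = contour.min(axis=0)
    x_max, y_max = contour.max(axis=0)
    return (float(x_min), float(y_min), float(x_max - x_min), float(y_max - y_min))

def track(frames, init_box: Box):
    """Run the three stages frame by frame; returns (box, contour) per frame."""
    box, results = init_box, []
    for frame in frames:
        box = cf_estimate_box(frame, box)         # stage 1: coarse rectangular box
        contour = deep_snake_contour(frame, box)  # stage 2: box -> object contour
        box = fit_box_from_contour(contour)       # stage 3: contour -> tracking result
        results.append((box, contour))
    return results

if __name__ == "__main__":
    frames = [np.zeros((240, 320, 3), dtype=np.uint8) for _ in range(5)]
    print(track(frames, (100.0, 60.0, 40.0, 80.0))[-1][0])
```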
| Tracking method | Precision | Success (AUC) | Speed (FPS) |
|---|---|---|---|
| OCT-CFDS | 0.922 | 0.700 | 31 |
| DaSiamRPN | 0.923 | 0.665 | 160 |
| SiamRPN++ | 0.915 | 0.696 | 35 |
| ECO | 0.910 | 0.691 | 8 |
| MDNet | 0.909 | 0.678 | 1 |
| C-COT | 0.903 | 0.673 | 0.3 |
| MCPF | 0.873 | 0.628 | 3.2 |
| TADT | 0.866 | 0.660 | 33.7 |
| DeepSRDCF | 0.851 | 0.635 | 1 |
| SRDCF | 0.789 | 0.598 | 8.2 |

Table 1  Performance of 10 tracking methods on the OTB-2015 dataset
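Table 1 reports OTB-style Precision and Success (AUC). As a reference for how these scores are commonly computed (the OTB toolkit's own evaluation code is authoritative; the 20-pixel threshold below is the benchmark's usual default, not a value taken from this paper), a minimal sketch:

```python
import numpy as np

def center_error(box_a, box_b):
    """Euclidean distance between the centres of two (x, y, w, h) boxes."""
    ca = np.array([box_a[0] + box_a[2] / 2.0, box_a[1] + box_a[3] / 2.0])
    cb = np.array([box_b[0] + box_b[2] / 2.0, box_b[1] + box_b[3] / 2.0])
    return float(np.linalg.norm(ca - cb))

def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb = min(box_a[0] + box_a[2], box_b[0] + box_b[2])
    yb = min(box_a[1] + box_a[3], box_b[1] + box_b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def precision_at(pred, gt, threshold=20.0):
    """Fraction of frames whose centre error is within the pixel threshold."""
    return float(np.mean([center_error(p, g) <= threshold for p, g in zip(pred, gt)]))

def success_auc(pred, gt):
    """Area under the success curve: mean success rate over IoU thresholds 0..1."""
    ious = np.array([iou(p, g) for p, g in zip(pred, gt)])
    thresholds = np.linspace(0.0, 1.0, 21)
    return float(np.mean([(ious >= t).mean() for t in thresholds]))
```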
| Tracking method | IV | SV | OCC | DEF | MB | FM | IPR | OPR | OV | BC | LR |
|---|---|---|---|---|---|---|---|---|---|---|---|
| OCT-CFDS | 0.923 | 0.903 | 0.892 | 0.941 | 0.876 | 0.927 | 0.915 | 0.919 | 0.911 | 0.943 | 0.907 |
| DaSiamRPN | 0.917 | 0.874 | 0.885 | 0.940 | 0.870 | 0.894 | 0.901 | 0.914 | 0.867 | 0.924 | 0.900 |
| SiamRPN++ | 0.925 | 0.905 | 0.883 | 0.913 | 0.878 | 0.929 | 0.916 | 0.920 | 0.838 | 0.903 | 0.911 |
| ECO | 0.914 | 0.870 | 0.889 | 0.922 | 0.859 | 0.888 | 0.880 | 0.904 | 0.906 | 0.942 | 0.888 |
| MDNet | 0.915 | 0.867 | 0.891 | 0.907 | 0.874 | 0.882 | 0.887 | 0.902 | 0.812 | 0.925 | 0.854 |
| C-COT | 0.884 | 0.869 | 0.888 | 0.916 | 0.872 | 0.884 | 0.875 | 0.894 | 0.889 | 0.882 | 0.885 |
| MCPF | 0.881 | 0.868 | 0.844 | 0.859 | 0.829 | 0.864 | 0.855 | 0.862 | 0.746 | 0.823 | 0.918 |
| TADT | 0.864 | 0.840 | 0.849 | 0.866 | 0.822 | 0.814 | 0.850 | 0.862 | 0.802 | 0.805 | 0.890 |
| DeepSRDCF | 0.791 | 0.805 | 0.808 | 0.836 | 0.811 | 0.818 | 0.811 | 0.834 | 0.794 | 0.841 | 0.702 |
| SRDCF | 0.792 | 0.695 | 0.750 | 0.816 | 0.781 | 0.728 | 0.702 | 0.773 | 0.619 | 0.775 | 0.631 |

Table 2  Precision of 10 tracking methods on videos with 11 different attributes (OTB-2015 attribute abbreviations: IV illumination variation, SV scale variation, OCC occlusion, DEF deformation, MB motion blur, FM fast motion, IPR in-plane rotation, OPR out-of-plane rotation, OV out of view, BC background clutter, LR low resolution)
| Tracking method | IV | SV | OCC | DEF | MB | FM | IPR | OPR | OV | BC | LR |
|---|---|---|---|---|---|---|---|---|---|---|---|
| OCT-CFDS | 0.716 | 0.693 | 0.663 | 0.717 | 0.681 | 0.694 | 0.685 | 0.696 | 0.678 | 0.703 | 0.660 |
| DaSiamRPN | 0.642 | 0.637 | 0.619 | 0.691 | 0.640 | 0.643 | 0.650 | 0.658 | 0.639 | 0.660 | 0.623 |
| SiamRPN++ | 0.713 | 0.690 | 0.655 | 0.710 | 0.674 | 0.691 | 0.682 | 0.692 | 0.648 | 0.691 | 0.658 |
| ECO | 0.713 | 0.664 | 0.657 | 0.716 | 0.680 | 0.652 | 0.652 | 0.677 | 0.681 | 0.700 | 0.617 |
| MDNet | 0.689 | 0.651 | 0.652 | 0.699 | 0.673 | 0.637 | 0.647 | 0.665 | 0.631 | 0.676 | 0.591 |
| C-COT | 0.682 | 0.651 | 0.647 | 0.698 | 0.680 | 0.639 | 0.632 | 0.658 | 0.665 | 0.652 | 0.619 |
| MCPF | 0.628 | 0.620 | 0.595 | 0.620 | 0.573 | 0.611 | 0.598 | 0.607 | 0.558 | 0.601 | 0.598 |
| TADT | 0.676 | 0.641 | 0.630 | 0.678 | 0.656 | 0.614 | 0.630 | 0.650 | 0.631 | 0.622 | 0.646 |
| DeepSRDCF | 0.621 | 0.601 | 0.581 | 0.647 | 0.632 | 0.595 | 0.584 | 0.612 | 0.583 | 0.627 | 0.474 |
| SRDCF | 0.613 | 0.535 | 0.556 | 0.625 | 0.608 | 0.540 | 0.524 | 0.573 | 0.487 | 0.583 | 0.480 |

Table 3  Success (AUC) of 10 tracking methods on videos with 11 different attributes (attribute abbreviations as in Table 2)
| Metric | OCT-CFDS | AFOD | D3S | SiamRPN++ | ATOM | DaSiamRPN | LADCF | SiamMask | SPM | ASRCF | STRCF |
|---|---|---|---|---|---|---|---|---|---|---|---|
| EAO↑ | 0.506 | 0.491 | 0.489 | 0.414 | 0.401 | 0.383 | 0.389 | 0.380 | 0.338 | 0.328 | 0.345 |
| Accuracy↑ | 0.67 | 0.634 | 0.64 | 0.60 | 0.59 | 0.59 | 0.51 | 0.61 | 0.58 | 0.49 | 0.523 |
| Robustness↓ | 0.113 | 0.117 | 0.150 | 0.234 | 0.204 | 0.276 | 0.159 | 0.276 | 0.300 | 0.234 | 0.215 |
| Speed (FPS)↑ | 31 | 20 | 25 | 35 | 30 | 160 | 10.8 | 55 | 120 | 28 | 2.9 |

Table 4  Performance of 11 tracking methods on the VOT-2018 dataset (EAO: expected average overlap; ↑ higher is better, ↓ lower is better)
| Metric | OCT-CFDS | D3S | SiamMask |
|---|---|---|---|
| mIOU | 0.68 | 0.66 | 0.63 |

Table 5  mIOU of 3 tracking methods on the VOT-2018 dataset
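Table 5 compares segmentation quality via mIOU against the VOT-2018 mask ground truth. As a reference, a minimal sketch of per-frame mask IoU averaged over frames is shown below; the exact averaging protocol behind the numbers above (e.g., any per-sequence weighting) is the authors' and may differ.

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU between two binary segmentation masks of the same shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, gt).sum() / union)

def mean_iou(pred_masks, gt_masks) -> float:
    """mIOU over a sequence: the average per-frame mask IoU."""
    return float(np.mean([mask_iou(p, g) for p, g in zip(pred_masks, gt_masks)]))
```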
| Tracking method | Backbone | Input image size | Computational complexity (FLOPs) |
|---|---|---|---|
| OCT-CFDS | DLA-34 | 255×255 | 5.64×10⁹ |
| SiamMask | ResNet-50 | 255×255 | 7.23×10⁹ |
| D3S | ResNet-50 | 255×255 | 6.81×10⁹ |
| AFOD | ResNet-50 | 255×255 | 6.97×10⁹ |

Table 6  Computational complexity of 4 tracking methods
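The FLOPs figures in Table 6 are the authors' own measurements. For a rough, do-it-yourself estimate of a backbone's cost at a 255×255 input, the third-party `thop` profiler and a torchvision `resnet50` can be used as sketched below; both are assumptions for illustration, not tools or networks named in the paper, and `thop` reports multiply-accumulate operations (MACs), which are often doubled when quoted as FLOPs.

```python
import torch
from torchvision.models import resnet50  # stand-in backbone, not any tracker's full network
from thop import profile                  # pip install thop

model = resnet50()
x = torch.randn(1, 3, 255, 255)           # input size listed in Table 6
macs, params = profile(model, inputs=(x,))
print(f"MACs: {macs:.3e}, params: {params:.3e}")
```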