Acta Electronica Sinica ›› 2022, Vol. 50 ›› Issue (8): 1905-1916. DOI: 10.12263/DZXB.20211613
KONG Wei, LIU Yun, LI Hui, CUI Xue-hong, YANG Hao-ran
Received: 2021-12-03
Revised: 2022-02-07
Online: 2022-08-25
Published: 2022-09-08
Abstract: Owing to the complexity of pedestrian interactions and the variability of the surrounding environment, pedestrian trajectory prediction remains a challenging task. Existing graph-based methods for modeling pedestrian interactions suffer from small network receptive fields, symmetric pairwise interactions between pedestrians, and fixed graph structures that cannot adapt to scene changes, so the predicted trajectories deviate considerably from the ground truth. To address these problems, this paper proposes a pedestrian trajectory prediction method based on a Global Adaptive Directed Graph (GADG). Global Feature Updating (GFU) and Global Feature Selection (GFS) are designed to enlarge the network's receptive range in the spatial and temporal domains, respectively, so as to capture global interaction features. A directed feature graph is constructed to define asymmetric interactions between pedestrians and improve the directionality of the model. An adaptive graph model is built to flexibly adjust the interactions between pedestrians, reduce redundant connections, and strengthen the adaptability of the graph. Experimental results on the ETH and UCY datasets show that, compared with the best prior results, the average displacement error is reduced by 14% and the final displacement error by 3%.
Wei KONG, Yun LIU, Hui LI, et al. Pedestrian Trajectory Prediction Based on Global Adaptive Directed Graph[J]. Acta Electronica Sinica, 2022, 50(8): 1905-1916.
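As a rough illustration of the asymmetric-interaction idea — not the authors' released implementation; the feature dimensions and embedding matrices below are invented for the example — an embedded-Gaussian affinity (the variant evaluated in Table 3) computed with two *different* learned embeddings yields a row-normalized interaction matrix that is directed, i.e. pedestrian i's influence on j differs from j's influence on i:

```python
import numpy as np

def embedded_gaussian_affinity(X, W_theta, W_phi):
    """Directed pairwise affinity between pedestrians.

    X: (N, d) per-pedestrian features at one frame.
    W_theta, W_phi: (d, e) embedding matrices. Because the two
    embeddings differ, A[i, j] != A[j, i] in general, so the
    resulting interaction graph is directed.
    """
    theta = X @ W_theta              # (N, e) "query" embedding
    phi = X @ W_phi                  # (N, e) "key" embedding
    scores = theta @ phi.T           # (N, N) raw affinities
    # Row-wise softmax: normalize each pedestrian's outgoing weights.
    scores = scores - scores.max(axis=1, keepdims=True)
    expd = np.exp(scores)
    return expd / expd.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))         # 5 pedestrians, 16-d features
W_theta = rng.normal(size=(16, 8))
W_phi = rng.normal(size=(16, 8))
A = embedded_gaussian_affinity(X, W_theta, W_phi)
print(A.shape)                       # (5, 5)
print(np.allclose(A, A.T))           # False: interactions are asymmetric
```

A symmetric (undirected) graph would force the two influences to be equal; using separate θ and φ embeddings is one standard way to break that symmetry.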
表1 ETH/UCY数据集 (Table 1 The ETH/UCY datasets)

| Dataset | Scene | Frames | Pedestrians | Groups | Obstacles |
|---|---|---|---|---|---|
| ETH | ETH | 1,448 | 360 | 243 | 44 |
| ETH | HOTEL | 1,168 | 390 | 623 | 25 |
| UCY | UNIV | 541 | 434 | 297 | 16 |
| UCY | ZARA1 | 866 | 148 | 91 | 34 |
| UCY | ZARA2 | 1,052 | 204 | 140 | 34 |
表2 (Table 2) Ablation on the number of graph convolution layers l, graph attention heads h, and prediction samples k

| Metric | l=1 | l=2 | l=3 | l=5 | h=2 | h=4 | h=8 | h=16 | k=5 | k=10 | k=15 | k=20 | k=25 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ADE↓ | 0.21 | 0.20 | 0.21 | 0.22 | 0.22 | 0.20 | 0.21 | 0.21 | 0.21 | 0.22 | 0.21 | 0.20 | 0.23 |
| FDE↓ | 0.42 | 0.40 | 0.41 | 0.43 | 0.42 | 0.40 | 0.41 | 0.42 | 0.42 | 0.42 | 0.42 | 0.40 | 0.44 |

(l is varied with h=4, k=20 fixed; h with l=2, k=20 fixed; k with l=2, h=4 fixed.)
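The ADE↓ and FDE↓ metrics used throughout these tables are the average and final displacement errors between predicted and ground-truth positions. A minimal NumPy sketch (the `(T, N, 2)` array layout is an assumption for the example, not the paper's code):

```python
import numpy as np

def ade_fde(pred, gt):
    """pred, gt: (T, N, 2) predicted / ground-truth positions over
    T future frames for N pedestrians.
    ADE: L2 error averaged over all frames and pedestrians.
    FDE: L2 error at the final predicted frame."""
    dists = np.linalg.norm(pred - gt, axis=-1)   # (T, N) per-point errors
    return dists.mean(), dists[-1].mean()

# Toy check: a constant (0.3, 0.4) offset is an 0.5 m error everywhere.
gt = np.zeros((12, 3, 2))
pred = gt + np.array([0.3, 0.4])
ade, fde = ade_fde(pred, gt)
print(round(ade, 2), round(fde, 2))   # 0.5 0.5
```

When k trajectories are sampled per pedestrian (the prediction count k above), the standard protocol reports the error of the best of the k samples.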
表3 (Table 3) Ablation of the affinity function and the residual connection on the LSTM

| Metric | Baseline | Gaussian | Embedded Gaussian | Dot product | Concatenation | Embedded Gaussian + residual |
|---|---|---|---|---|---|---|
| ADE↓ | 0.20 | 0.22 | 0.20 | 0.21 | 0.22 | 0.18 |
| FDE↓ | 0.40 | 0.45 | 0.40 | 0.44 | 0.44 | 0.37 |

(The four affinity functions are compared before the residual connection is added; the last column adds the residual connection to the embedded-Gaussian variant.)
表4 (Table 4) Effect of inter-pedestrian distance, speed, and direction on prediction performance

| Metric | Distance | Distance + speed | Distance + direction | Distance + speed + direction |
|---|---|---|---|---|
| ADE↓ | 0.1805 | 0.1799 | 0.1787 | 0.1780 |
| FDE↓ | 0.3713 | 0.3704 | 0.3693 | 0.3677 |
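The three cues fused in the spatial attention of Table 4 — inter-pedestrian distance, speed, and direction — can be derived per frame from positions and velocities. A sketch under assumed `(N, 2)` inputs (the actual fusion weights and embeddings in the paper are learned and not reproduced here):

```python
import numpy as np

def pairwise_features(pos, vel):
    """pos, vel: (N, 2) positions and velocities at one frame.
    Returns pairwise distance, speed difference, and heading
    difference -- the three cues fused in the spatial attention."""
    diff = pos[None, :, :] - pos[:, None, :]      # (N, N, 2) offsets
    dist = np.linalg.norm(diff, axis=-1)          # pairwise distance
    speed = np.linalg.norm(vel, axis=-1)          # (N,) scalar speeds
    dspeed = np.abs(speed[None, :] - speed[:, None])
    heading = np.arctan2(vel[:, 1], vel[:, 0])    # (N,) walking direction
    dhead = np.abs(heading[None, :] - heading[:, None])
    dhead = np.minimum(dhead, 2 * np.pi - dhead)  # wrap into [0, pi]
    return dist, dspeed, dhead

pos = np.array([[0.0, 0.0], [3.0, 4.0]])
vel = np.array([[1.0, 0.0], [0.0, 1.0]])
dist, dspeed, dhead = pairwise_features(pos, vel)
print(dist[0, 1])              # 5.0
print(round(dhead[0, 1], 3))   # 1.571 (pi/2)
```

Table 4 suggests each added cue gives a small but consistent gain, with all three together performing best.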
表5 (Table 5) Setting the number of temporal attention (TAM) heads

| Metric | 2 heads | 4 heads | 8 heads | 16 heads |
|---|---|---|---|---|
| ADE↓ | 0.21 | 0.19 | 0.17 | 0.20 |
| FDE↓ | 0.42 | 0.37 | 0.36 | 0.37 |
表6 (Table 6) Ablation study of each module

| Baseline | GFU | Res | SA | TAM | APL | GFS | ADE↓ | FDE↓ |
|---|---|---|---|---|---|---|---|---|
| √ | | | | | | | 0.2917 | 0.6043 |
| √ | √ | √ | | | | | 0.2649 | 0.5743 |
| √ | √ | √ | √ | | | | 0.2645 | 0.5741 |
| √ | √ | √ | √ | √ | | | 0.2642 | 0.5739 |
| √ | √ | √ | √ | √ | √ | | 0.2546 | 0.5648 |
| √ | √ | √ | √ | √ | √ | √ | 0.2515 | 0.5513 |
表7 (Table 7) Comparison of results on the ETH/UCY datasets (each cell: ADE↓/FDE↓; — marks a value missing from the source table)

| Method | ETH | HOTEL | UNIV | ZARA1 | ZARA2 | Average |
|---|---|---|---|---|---|---|
| Linear* | 1.33/2.94 | 0.39/0.72 | 0.82/1.59 | 0.62/1.21 | 0.77/1.48 | 0.79/1.59 |
| SR-LSTM* | —/1.25 | 0.37/0.74 | 0.51/1.10 | 0.41/0.90 | 0.32/0.70 | 0.45/0.94 |
| Social LSTM | 1.09/2.35 | 0.79/1.76 | 0.67/1.40 | 0.47/1.00 | 0.56/1.17 | 0.72/1.54 |
| Social GAN | 0.87/1.62 | 0.67/1.37 | 0.76/1.52 | 0.35/0.68 | 0.42/0.84 | 0.61/1.21 |
| SoPhie | 0.70/1.43 | 0.76/1.67 | 0.54/1.24 | —/0.63 | 0.38/0.78 | 0.54/1.15 |
| CGNS | —/1.40 | 0.70/0.93 | —/1.22 | —/— | 0.35/0.71 | 0.49/0.97 |
| PIF | 0.73/1.65 | —/— | 0.60/1.27 | 0.38/0.81 | 0.31/0.68 | 0.46/1.00 |
| STSGN | 0.75/1.63 | 0.63/1.01 | —/— | —/0.65 | —/— | 0.48/0.99 |
| GAT | 0.68/1.29 | 0.68/1.40 | 0.57/1.29 | —/— | 0.37/0.75 | 0.52/1.07 |
| Social-BiGAT | 0.69/1.29 | 0.49/1.01 | 0.55/1.32 | —/0.62 | 0.36/0.75 | 0.48/1.00 |
| Social-STGCNN | 0.64/— | 0.49/0.85 | —/— | 0.34/— | 0.30/— | —/— |
| STGAT | 0.65/— | —/— | 0.52/1.10 | 0.34/0.69 | —/0.60 | —/— |
| GADG | 0.61/— | —/— | —/— | —/— | —/— | —/— |
表8 (Table 8) Comparison of inference time

| Method | Inference time/s |
|---|---|
| Social-LSTM | 1.4736 |
| SR-LSTM | 0.1973 |
| Social GAN | 0.1210 |
| PIF | 0.1431 |
| Social-STGCNN | 0.0025 |
| STGAT | 0.0310 |
| Introvert | 0.1200 |
| GADG | 0.0127 |
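Numbers like those in Table 8 are typically produced by averaging wall-clock time per forward pass after a few warm-up runs. A generic measurement harness — a sketch, not any author's benchmark code; `model_fn` and `batch` stand in for an actual model and input:

```python
import time

def mean_inference_time(model_fn, batch, n_warmup=10, n_runs=100):
    """Mean wall-clock seconds per call to model_fn(batch).

    Warm-up iterations are discarded so one-time costs (caching,
    JIT compilation, lazy allocation) do not skew the average.
    """
    for _ in range(n_warmup):
        model_fn(batch)
    start = time.perf_counter()
    for _ in range(n_runs):
        model_fn(batch)
    return (time.perf_counter() - start) / n_runs

# Usage with a stand-in workload:
t = mean_inference_time(lambda b: sum(b), list(range(10_000)))
print(f"{t:.6f} s per pass")
```

For GPU models, an accurate version would also need to synchronize the device before reading the clock, since kernel launches return asynchronously.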
1. KONG W, LIU Y, LI H, et al. Survey of pedestrian trajectory prediction methods based on deep learning[J]. Control and Decision, 2021, 36(12): 2841-2850. (in Chinese)
2. WU P X, CHEN S H, METAXAS D N. MotionNet: Joint perception and motion prediction for autonomous driving based on bird's eye view maps[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle: IEEE, 2020: 11382-11392.
3. LUO Y F, CAI P P, BERA A, et al. PORCA: Modeling and planning for autonomous driving among many pedestrians[J]. IEEE Robotics and Automation Letters, 2018, 3(4): 3418-3425.
4. HU T, LONG C J, XIAO C X. A novel visual representation on text using diverse conditional GAN for visual recognition[J]. IEEE Transactions on Image Processing, 2021, 30: 3499-3512.
5. SALEH F, ALIAKBARIAN S, SALZMANN M, et al. ArTIST: Autoregressive trajectory inpainting and scoring for tracking[EB/OL]. (2020-04-16)[2021-12].
6. LIANG J W, JIANG L, NIEBLES J C, et al. Peeking into the future: Predicting future person activities and locations in videos[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, CA, USA: IEEE, 2019: 5718-5727.
7. ZHANG S, GONG Y H, WANG J J. The development of deep convolution neural network and its applications on computer vision[J]. Chinese Journal of Computers, 2019, 42(3): 453-482. (in Chinese)
8. ALAHI A, GOEL K, RAMANATHAN V, et al. Social LSTM: Human trajectory prediction in crowded spaces[C]//2016 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA: IEEE, 2016: 961-971.
9. GUPTA A, JOHNSON J, LI F F, et al. Social GAN: Socially acceptable trajectories with generative adversarial networks[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City, UT, USA: IEEE, 2018: 2255-2264.
10. SUN H, ZHAO Z Q, HE Z H. Reciprocal learning networks for human trajectory prediction[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle: IEEE, 2020: 7414-7423.
11. YANG B, YANG C, WANG P, et al. TPPO: A novel trajectory predictor with pseudo oracle[EB/OL]. (2020-02-04)[2021-12].
12. SADEGHIAN A, KOSARAJU V, SADEGHIAN A, et al. SoPhie: An attentive GAN for predicting paths compliant to social and physical constraints[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, CA, USA: IEEE, 2019: 1349-1358.
13. IVANOVIC B, PAVONE M. The Trajectron: Probabilistic multi-agent trajectory modeling with dynamic spatiotemporal graphs[C]//2019 IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, Korea: IEEE, 2019: 2375-2384.
14. HUANG Y F, BI H K, LI Z X, et al. STGAT: Modeling spatial-temporal interactions for human trajectory prediction[C]//2019 IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, Korea: IEEE, 2019: 6271-6280.
15. MOHAMED A, QIAN K, ELHOSEINY M, et al. Social-STGCNN: A social spatio-temporal graph convolutional neural network for human trajectory prediction[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle: IEEE, 2020: 14412-14420.
16. WANG C X, CAI S F, TAN G. GraphTCN: Spatio-temporal interaction modeling for human trajectory prediction[C]//2021 IEEE Winter Conference on Applications of Computer Vision (WACV). Waikoloa: IEEE, 2021: 3449-3458.
17. MAO L, GONG X F, YANG D W, et al. Space-time social relationship pooling pedestrian trajectory prediction model[J]. Journal of Computer-Aided Design & Computer Graphics, 2020, 32(12): 1918-1925. (in Chinese)
18. KOSARAJU V, SADEGHIAN A, MARTÍN-MARTÍN R, et al. Social-BiGAT: Multimodal trajectory forecasting using bicycle-GAN and graph attention networks[C]//33rd Annual Conference on Neural Information Processing Systems (NIPS). Vancouver, BC, Canada: NIPS, 2019: 1-10.
19. HADDAD S, WU M Q, WEI H, et al. Situation-aware pedestrian trajectory prediction with spatio-temporal attention model[EB/OL]. (2019-02-13)[2021-12].
20. YU C J, MA X, REN J W, et al. Spatio-temporal graph transformer networks for pedestrian trajectory prediction[C]//European Conference on Computer Vision (ECCV). Glasgow, UK: Springer, 2020: 507-523.
21. LIANG J W, JIANG L, MURPHY K, et al. The garden of forking paths: Towards multi-future trajectory prediction[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle: IEEE, 2020: 10505-10515.
22. FENG Y, ZHANG T T, WANG C X. Group activity recognition method based on pseudo 3D residual network and interaction modeling[J]. Acta Electronica Sinica, 2020, 48(7): 1269-1275. (in Chinese)
23. ZHAO T Y, XU Y F, MONFORT M, et al. Multi-agent tensor fusion for contextual trajectory prediction[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, CA, USA: IEEE, 2019: 12118-12126.
24. CHENG Y, CHI R H, HUANG S B, et al. Uncertain trajectory prediction method using non-parametric density estimation[J]. Acta Automatica Sinica, 2019, 45(4): 787-798. (in Chinese)
25. ZHANG L D, SHE Q, GUO P. Stochastic trajectory prediction with social graph network[EB/OL]. (2019-07-24)[2021-12].
26. SUN J H, JIANG Q H, LU C W. Recursive social behavior graph for trajectory prediction[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle: IEEE, 2020: 657-666.
27. SU H, DONG Y P, ZHU J, et al. Crowd scene understanding with coherent recurrent neural networks[C]//Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI). New York, NY, USA: IJCAI, 2016: 3469-3476.
28. VEMULA A, MUELLING K, OH J. Social attention: Modeling attention in human crowds[C]//2018 IEEE International Conference on Robotics and Automation (ICRA). Brisbane, QLD, Australia: IEEE, 2018: 4601-4607.
29. LIU Q, WU S, WANG L, et al. Predicting the next location: A recurrent model with spatial and temporal contexts[C]//30th AAAI Conference on Artificial Intelligence (AAAI). Phoenix, AZ, USA: AAAI, 2016: 194-200.
30. ZHANG P, OUYANG W L, ZHANG P F, et al. SR-LSTM: State refinement for LSTM towards pedestrian trajectory prediction[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, CA, USA: IEEE, 2019: 12077-12086.
31. YI S, LI H S, WANG X G. Pedestrian behavior understanding and prediction with deep neural networks[C]//14th European Conference on Computer Vision (ECCV). Amsterdam, Netherlands: Springer, 2016: 263-279.
32. YAGI T, MANGALAM K, YONETANI R, et al. Future person localization in first-person videos[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City, UT, USA: IEEE, 2018: 7593-7602.
33. LI J C, MA H B, TOMIZUKA M. Conditional generative neural system for probabilistic trajectory prediction[C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Macau, China: IEEE, 2019: 6150-6156.
34. SHAFIEE N, PADIR T, ELHAMIFAR E. Introvert: Human trajectory prediction via conditional 3D attention[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville: IEEE, 2021: 16810-16820.