
Fast PL-SLAM Algorithm Based on Improved Keyframe Extraction Strategy
To address the problems that the PL-SLAM (Point and Line Simultaneous Localization And Mapping) algorithm incurs redundant feature computation by using point and line features simultaneously in texture-dense scenes, and that it misses keyframes during curved motion, a fast PL-SLAM algorithm based on an improved keyframe selection strategy, IFPL-SLAM (Improved keyframe extraction strategy-based Fast PL-SLAM), is proposed. The algorithm introduces an information entropy-guided pose tracking decision: the point features extracted first are evaluated with information entropy, and the evaluation result determines how point and line features are fused. This avoids the data redundancy caused by using both feature types at the same time in texture-dense scenes and improves the real-time performance of the algorithm. Meanwhile, to avoid missing keyframes during curved motion, a reverse-index keyframe selection strategy supplements the keyframes missed along curved trajectories, which improves loop-closure accuracy and localization precision. Tests on the public KITTI and TUM datasets show that, compared with PL-SLAM, the running time of the proposed algorithm is reduced by 16.0% and the absolute trajectory error is reduced by 23.4%, demonstrating good mapping capability.
Keywords: point and line features / pose estimation / information entropy / keyframe / reverse index / simultaneous localization and mapping
Algorithm: information entropy-guided pose tracking decision
Input: consecutive image frames
Output: current robot pose estimate
Parameter: information entropy threshold for point features
FOR each new frame
  Extract point features with the ORB algorithm
  Compute the information entropy score of the point features
  IF the entropy score reaches the threshold
    Match the point features and estimate the current robot pose
  ELSE
    Invoke line features: extract line features with the LSD algorithm, filter them, and merge broken segments
    Match the fused point and line features and estimate the current robot pose
  END IF
END FOR
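A minimal Python/OpenCV sketch of this decision is given below. It assumes the entropy score is the mean Shannon entropy of the grey-level patches around the ORB keypoints; `ENTROPY_THRESHOLD`, `PATCH`, `patch_entropy`, and `extract_features` are illustrative names and values, not taken from the paper.

```python
import cv2
import numpy as np

# Illustrative values only; the paper does not give the threshold or patch size.
ENTROPY_THRESHOLD = 6.5   # bits, assumed tuning parameter
PATCH = 16                # half-size of the window around each keypoint

def patch_entropy(gray, kp, half=PATCH):
    """Shannon entropy of the grey-level histogram around one keypoint (gray: 8-bit image)."""
    x, y = int(kp.pt[0]), int(kp.pt[1])
    roi = gray[max(0, y - half):y + half, max(0, x - half):x + half]
    hist = np.bincount(roi.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def extract_features(gray):
    """Point features first; fall back to point+line fusion when the entropy score is low."""
    orb = cv2.ORB_create(nfeatures=1000)
    kps, desc = orb.detectAndCompute(gray, None)
    score = np.mean([patch_entropy(gray, k) for k in kps]) if kps else 0.0

    if score >= ENTROPY_THRESHOLD:
        return kps, desc, None          # points alone are informative enough
    # Low entropy: additionally extract line features with LSD.
    # Note: createLineSegmentDetector availability depends on the OpenCV build.
    lsd = cv2.createLineSegmentDetector()
    lines, _, _, _ = lsd.detect(gray)
    return kps, desc, lines
```

In texture-dense regions the mean patch entropy is high, so the ORB points alone carry the pose estimate; the cost of line extraction is only paid when the point features are weak.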
Algorithm: reverse-index keyframe selection
Input: consecutive image frames
Output: keyframes
Parameter: displacement threshold
FOR each new frame
  Compute the displacement and rotation relative to the last keyframe
  IF the displacement exceeds the threshold
    Insert a new keyframe k
  ELSEIF the rotation indicates curved motion
    Insert a new keyframe k, then reverse-index from the current keyframe back over the ordinary frames
    IF a traversed ordinary frame satisfies the keyframe condition
      Insert it as a new keyframe k
    ELSE
      return null
    END IF
  END IF
END FOR
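The reverse-index supplement can be sketched as follows. The `Frame` class, `TRANS_THRESHOLD`, `ROT_THRESHOLD`, and the promotion test inside the backward loop are assumptions made for illustration, not the paper's actual criteria.

```python
import numpy as np

# Assumed thresholds; the paper only names a displacement threshold.
TRANS_THRESHOLD = 0.3     # m, displacement threshold
ROT_THRESHOLD = 10.0      # degrees, rotation hint for curved motion

class Frame:
    def __init__(self, pose_t, pose_yaw):
        self.t = np.asarray(pose_t, dtype=float)   # translation (x, y, z)
        self.yaw = float(pose_yaw)                 # heading angle in degrees

def select_keyframes(frames):
    """Forward pass inserts keyframes by displacement; when the rotation hints at
    curved motion, a backward (reverse-index) pass over the ordinary frames since
    the last keyframe supplements any keyframe missed on the curve."""
    keyframes = [frames[0]]
    ordinary = []                                  # frames skipped since the last keyframe
    for f in frames[1:]:
        last = keyframes[-1]
        trans = np.linalg.norm(f.t - last.t)
        rot = abs(f.yaw - last.yaw)                # angle wrap-around ignored for brevity
        if trans > TRANS_THRESHOLD:
            keyframes.append(f)
            ordinary.clear()
        elif rot > ROT_THRESHOLD:                  # curved motion: displacement alone misses it
            keyframes.append(f)
            # Reverse index: walk back over the ordinary frames and promote the first
            # one that itself satisfies a keyframe condition (placeholder test here).
            for g in reversed(ordinary):
                if abs(g.yaw - last.yaw) > ROT_THRESHOLD / 2:
                    keyframes.insert(len(keyframes) - 1, g)
                    break
            ordinary.clear()
        else:
            ordinary.append(f)
    return keyframes
```

Buffering only the ordinary frames accumulated since the last keyframe keeps the backward pass cheap: just the frames skipped on the current curve segment are re-examined.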
Table 1  Comparison of feature matching accuracy and feature extraction time of the three algorithms on the TUM dataset

| TUM sequence | Accuracy (%): ORB-SLAM2 | Accuracy (%): PL-SLAM | Accuracy (%): IFPL-SLAM | Time (ms): ORB-SLAM2 | Time (ms): PL-SLAM | Time (ms): IFPL-SLAM |
|---|---|---|---|---|---|---|
| fr2/360-hemisphere | 84.5 | 87.5 | 90.8 | 38.5 | 54.2 | 45.6 |
| fr3/large/cabinet | 76.5 | 85.7 | 89.4 | 40.6 | 57.3 | 50.1 |
| fr1/plant | 85.6 | 86.3 | 88.9 | 42.3 | 57.8 | 47.2 |
| fr2/xyz | 86.7 | 88.1 | 90.1 | 40.5 | 60.1 | 49.7 |
| Average | 83.3 | 86.9 | 89.8 | 40.5 | 57.4 | 48.2 |
Table 2  Comparison of the three algorithms on four sequences of the KITTI dataset

| KITTI sequence | Keyframes: ORB-SLAM2 | Keyframes: PL-SLAM | Keyframes: IFPL-SLAM | Time per frame (ms): ORB-SLAM2 | Time per frame (ms): PL-SLAM | Time per frame (ms): IFPL-SLAM | Trajectory RMSE (m): ORB-SLAM2 | Trajectory RMSE (m): PL-SLAM | Trajectory RMSE (m): IFPL-SLAM |
|---|---|---|---|---|---|---|---|---|---|
| 01 | 373 | 385 | 394 | 125 | 172 | 135 | 12.73 | 9.78 | 7.96 |
| 10 | 495 | 487 | 547 | 114 | 155 | 121 | 11.35 | 8.52 | 6.08 |
| 08 | - | 1784 | 1927 | - | 164 | 124 | - | 6.85 | 4.96 |
| 02 | 2097 | 2182 | 2285 | 134 | 181 | 156 | 9.27 | 7.89 | 6.31 |
| Average | 988 | 1207 | 1288 | 124.33 | 168 | 134 | 11.12 | 8.26 | 6.33 |
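The trajectory RMSE reported in Table 2 is the root-mean-square of the absolute trajectory error. The paper does not spell out the formula; the formulation below follows the common TUM-benchmark definition and is included only for reference.

```latex
\mathrm{RMSE}(E_{1:N}) =
  \sqrt{\frac{1}{N}\sum_{i=1}^{N}
        \bigl\lVert \operatorname{trans}\bigl(Q_i^{-1}\, S\, P_i\bigr) \bigr\rVert^{2}}
```

Here $P_i$ are the estimated poses, $Q_i$ the ground-truth poses, $S$ the rigid-body transform aligning the two trajectories, and $\operatorname{trans}(\cdot)$ extracts the translational component of a pose error.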
Table 3  Comparison of translation and rotation errors of the three algorithms on the TUM dataset

| TUM sequence | ORB-SLAM2 trans. error (m) | ORB-SLAM2 rot. error (°) | PL-SLAM trans. error (m) | PL-SLAM rot. error (°) | IFPL-SLAM trans. error (m) | IFPL-SLAM rot. error (°) |
|---|---|---|---|---|---|---|
| fr3/large/cabinet | - | - | 0.059 | 2.48 | 0.032 | 1.98 |
| fr2/360-hemisphere | - | - | - | - | 0.043 | 2.04 |
| fr1/plant | 0.092 | 4.52 | 0.054 | 3.12 | 0.048 | 2.95 |
| fr2/xyz | 0.038 | 2.24 | 0.029 | 1.83 | 0.023 | 1.24 |
| Average | 0.065 | 3.38 | 0.047 | 2.48 | 0.037 | 2.053 |