Matched image pairs are hard to obtain with conventional scene-matching localization methods for a UAV (Unmanned Aerial Vehicle) flying at low altitude, because of the small viewing coverage and the large difference in capturing angle compared with satellite images. We propose a localization method based on instant dense 3D reconstruction. Firstly, a fast SLAM (Simultaneous Localization and Mapping) method retrieves the camera poses of the image sequence captured by the UAV. Secondly, a cooperative denoising and optimization algorithm across multiple key frames is applied to obtain a dense depth map and a dense point cloud. Thirdly, a virtual view, whose viewing angle is close to that of the satellite, is generated by an iterative optimization method. Finally, the position of the UAV is estimated from correspondences between the satellite map and the previously generated virtual view. Since the dense 3D point cloud integrates the information of multiple aerial images with small fields of view, and the viewing angles of some generated virtual views are close to those of the satellite images, the proposed method achieves a higher success rate and accuracy of localization. Experimental results illustrate the effectiveness and applicability of the proposed framework.
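The virtual-view step above can be illustrated with a minimal sketch. Assuming the dense point cloud is available as an (N, 3) coordinate array with per-point colours (`points` and `colors` are hypothetical names, not from the paper), a satellite-like nadir view can be rendered by orthographic top-down projection with a height-based z-buffer. This is a deliberate simplification for illustration; the paper itself generates the virtual view by an iterative optimization method.

```python
import numpy as np

def render_nadir_view(points, colors, resolution=1.0, grid_size=64):
    """Render a satellite-like (top-down) virtual view from a dense
    point cloud by orthographic projection with a simple z-buffer:
    each ground cell keeps the colour of its highest point, i.e. the
    point nearest a downward-looking camera.

    points: (N, 3) array, z is height above ground.
    colors: (N, 3) array of per-point RGB values.
    resolution: ground size of one pixel, in the cloud's units.
    """
    img = np.zeros((grid_size, grid_size, 3))
    zbuf = np.full((grid_size, grid_size), -np.inf)
    # Shift the cloud so its minimum x/y corner maps to pixel (0, 0).
    origin = points[:, :2].min(axis=0)
    for (x, y, z), c in zip(points, colors):
        u = int((x - origin[0]) / resolution)
        v = int((y - origin[1]) / resolution)
        # Keep only the highest point that falls into each cell.
        if 0 <= u < grid_size and 0 <= v < grid_size and z > zbuf[v, u]:
            zbuf[v, u] = z
            img[v, u] = c
    return img, zbuf
```

The rendered image can then be matched against the satellite map (e.g. by template matching), since both now share a near-nadir viewing angle.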
[1] Majdik A,Verda D,Albers-Schoenberg Y,et al.Air-ground matching:appearance-based GPS-denied urban localization of micro aerial vehicles[J].Journal of Field Robotics,2015,32(7):1015-1039.
[2] Gálvez-López D,Tardós J.Bags of binary words for fast place recognition in image sequences[J].IEEE Transactions on Robotics,2012,28(5):1188-1197.
[3] Cummins M,Newman P.Appearance-only SLAM at large scale with FAB-MAP 2.0[J].International Journal of Robotics Research,2011,30(9):1100-1123.
[4] Morel J,Yu G.ASIFT:A new framework for fully affine invariant image comparison[J].SIAM Journal on Imaging Sciences,2009,2(2):438-469.
[5] Bansal M,Daniilidis K,Sawhney H.Ultra-wide baseline facade matching for geo-localization[A].Proceedings of the European Conference on Computer Vision Workshops and Demonstrations[C].Florence,Italy:Springer,2012.175-186.
[6] Bansal M,Sawhney H,Cheng H,et al.Geo-localization of street views with aerial image databases[A].Proceedings of the 19th ACM International Conference on Multimedia[C].Scottsdale,AZ,USA:ACM,2011.1125-1128.
[7] Di N,Li G,Wei Y.Image matching technology based on SIFT for terminal-guiding system[J].Infrared and Laser Engineering,2011,40(8):1589-1593.(in Chinese)
[8] Chen B,Zhao Y G,Li X.Scene matching based on speeded up robust features[J].Systems Engineering and Electronics,2009,31(11):2714-2718.(in Chinese)
[9] Klein G,Murray D.Parallel tracking and mapping for small AR workspaces[A].Proceedings of the Sixth IEEE/ACM International Symposium on Mixed and Augmented Reality[C].Nara,Japan:IEEE,2007.225-234.
[10] Newcombe R,Lovegrove S,Davison A.DTAM:dense tracking and mapping in real-time[A].Proceedings of the IEEE International Conference on Computer Vision[C].Barcelona,Spain:IEEE,2011.2320-2327.
[11] Engel J,Schöps T,Cremers D.LSD-SLAM:large-scale direct monocular SLAM[A].Proceedings of the 13th European Conference on Computer Vision[C].Zurich,Switzerland:Springer,2014.834-849.
[12] Forster C,Pizzoli M,Scaramuzza D.SVO:fast semi-direct monocular visual odometry[A].Proceedings of the 2014 IEEE International Conference on Robotics and Automation[C].Hong Kong,China:IEEE,2014.15-22.
[13] Davison A,Reid I,Molton N,et al.MonoSLAM:real-time single camera SLAM[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2007,29(6):1052-1067.
[14] Murray R,Li Z,Sastry S.A Mathematical Introduction to Robotic Manipulation[M].Florida:CRC Press,1994.36-40.
[15] Absil P,Mahony R,Sepulchre R.Optimization Algorithms on Matrix Manifolds[M].Princeton:Princeton University Press,2009.5-12.
[16] Furukawa Y,Ponce J.Accurate,dense,and robust multiview stereopsis[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2010,32(8):1362-1376.
[17] Wu C.Towards linear-time incremental structure from motion[A].Proceedings of the 2013 International Conference on 3D Vision[C].Seattle,USA:IEEE,2013.127-134.