• Source journal of the Engineering Index (EI)
  • Chinese core journal in comprehensive science and technology
  • Source journal for Chinese science and technology paper statistics
  • Source journal of the Chinese Science Citation Database


Survey of simultaneous localization and mapping based on environmental semantic information

LI Xiao-qian, HE Wei, ZHU Shi-qiang, LI Yue-hua, XIE Tian

Citation: LI Xiao-qian, HE Wei, ZHU Shi-qiang, LI Yue-hua, XIE Tian. Survey of simultaneous localization and mapping based on environmental semantic information[J]. Chinese Journal of Engineering. doi: 10.13374/j.issn2095-9389.2020.11.09.006


doi: 10.13374/j.issn2095-9389.2020.11.09.006
Funding: National Key Research and Development Program of China (2018AAA0102703); Stable Support Project of the State Administration of Science, Technology and Industry for National Defense (HTKJ2019KL502005); the 67th batch of the China Postdoctoral Science Foundation General Program (HTKJ2019KL502005)
    Corresponding author: E-mail: liyh@zhejianglab.com

  • CLC number: TP24

Survey of simultaneous localization and mapping based on environmental semantic information

  • Abstract: Simultaneous localization and mapping (SLAM) is a major research focus in robotics. Although traditional SLAM techniques have reached a high level of real-time performance, they still show significant deficiencies in localization accuracy and robustness. The environment maps they build meet the robot's localization needs to some extent, but are insufficient for the robot to complete navigation, obstacle avoidance, and similar tasks autonomously, and offer limited interactivity. With the development of deep learning, extracting environmental semantic information with deep learning methods and combining it with SLAM has attracted growing attention from researchers. This paper surveys the latest progress in applying environmental semantic information to simultaneous localization and mapping, highlighting and summarizing prominent results on combining semantic information with traditional visual SLAM in both localization and mapping, and presents an in-depth comparison between traditional visual SLAM and semantic SLAM algorithms. Finally, future directions for semantic SLAM research are discussed.
  • Figure 1.  Architecture of the V-SLAM system
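The pipeline of Figure 1 can be sketched as a minimal front end: track each incoming frame, then promote a frame to a keyframe once the camera has moved far enough from the previous keyframe. The class name, the distance threshold, and the plain-tuple pose representation below are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a V-SLAM front end (illustrative, not the authors' code):
# tracking -> keyframe decision -> (in a full system) local mapping and BA.

class VSLAMPipeline:
    def __init__(self, keyframe_dist=0.5):
        self.keyframe_dist = keyframe_dist
        self.keyframes = []          # accepted keyframe poses (x, y, z)

    def track(self, pose_guess):
        """Front-end tracking: a real system refines the pose by matching
        features to map points; here we simply accept the guess."""
        return pose_guess

    def is_keyframe(self, pose):
        """Promote a frame when translation from the last keyframe is large."""
        if not self.keyframes:
            return True
        last = self.keyframes[-1]
        dist = sum((a - b) ** 2 for a, b in zip(pose, last)) ** 0.5
        return dist > self.keyframe_dist

    def process(self, pose_guess):
        pose = self.track(pose_guess)
        if self.is_keyframe(pose):
            self.keyframes.append(pose)   # would trigger mapping / local BA
        return pose
```

Feeding a short straight-line trajectory through `process` shows keyframes being inserted only after sufficient motion, which is how systems such as ORB-SLAM2 [22] keep the back end's optimization problem small.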

    Table 1.  Comparison of scene-oriented semantic maps (SemanticFusion[60]) with object-oriented semantic maps (MaskFusion[61])

    [Image table: for each system, rows show the input frame, the ground truth, and the output semantic map; the image cells are not reproduced here.]

    Table 2.  Comparison of traditional SLAM and semantic SLAM algorithms

    Name                      Traditional SLAM                              Semantic SLAM
    Data scale                Small                                         Large
    Information utilization   Low                                           High
    Generalization            Weak                                          Strong
    Visualization             Low                                           High
    Latency                   Low                                           High
    Robustness                Weak                                          Strong
    Path planning             Weak intelligence                             Strong intelligence
    Application scene         Static, strong texture, unchanged lighting    Static or dynamic; texture and lighting unlimited
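One concrete way semantic SLAM earns the "strong robustness, dynamic scenes" entries above, used by dynamic-environment systems such as DS-SLAM [45] and DynaSLAM [46], is to discard feature points that fall on movable objects before pose estimation. The sketch below is an assumption-laden illustration: a dict stands in for a per-pixel segmentation mask, and the set of movable classes is assumed, not taken from any specific system.

```python
# Illustrative sketch (not the authors' code): use per-pixel semantic labels
# to reject features on movable objects before estimating camera pose.

DYNAMIC_CLASSES = {"person", "car", "bicycle"}  # assumed movable-class set

def filter_dynamic_features(features, semantic_mask):
    """Keep only features whose pixel label is not a movable class.

    features      -- list of (u, v) pixel coordinates
    semantic_mask -- dict mapping (u, v) -> class label
                     (stand-in for a dense segmentation label image)
    """
    static = []
    for (u, v) in features:
        label = semantic_mask.get((u, v), "background")
        if label not in DYNAMIC_CLASSES:
            static.append((u, v))
    return static
```

In a full system the surviving static features feed the usual geometric pipeline (PnP or bundle adjustment), which is why such filtering improves robustness without changing the SLAM back end.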
  • [1] Smith R C, Cheeseman P. On the representation and estimation of spatial uncertainty. Int J Rob Res, 1986, 5(4): 56 doi: 10.1177/027836498600500404
    [2] Cadena C, Carlone L, Carrillo H, et al. Past, present, and future of simultaneous localization and mapping: toward the robust-perception age. IEEE Trans Rob, 2016, 32(6): 1309 doi: 10.1109/TRO.2016.2624754
    [3] Davison A J, Reid I D, Molton N D, et al. MonoSLAM: real-time single camera SLAM. IEEE Trans Pattern Anal Mach Intell, 2007, 29(6): 1052 doi: 10.1109/TPAMI.2007.1049
    [4] Liu Q, Duan F H, Sang Y, et al. A survey of loop-closure detection method of visual SLAM in complex environments. Robot, 2019, 41(1): 112
    [5] Hess W, Kohler D, Rapp H, et al. Real-time loop closure in 2D LIDAR SLAM // Proceedings of 2016 IEEE International Conference on Robotics and Automation. Stockholm, 2016: 1271
    [6] Chen X, Läbe T, Milioto A, et al. OverlapNet: Loop closing for LiDAR-based SLAM // Proceeding of Robotics: Science and Systems (RSS). Corvalis, 2020
    [7] Liu H M, Zhang G F, Bao H J. A survey of monocular simultaneous localization and mapping. J Computer-Aided Des Comput Graph, 2016, 28(6): 855 doi: 10.3969/j.issn.1003-9775.2016.06.001
    [8] Taketomi T, Uchiyama H, Ikeda S. Visual SLAM algorithms: a survey from 2010 to 2016. IPSJ Trans Comput Vision Appl, 2017, 9(1): 16 doi: 10.1186/s41074-017-0027-2
    [9] Fuentes-Pacheco J, Ruiz-Ascencio J, Rendón-Mancha J M. Visual simultaneous localization and mapping: a survey. Artif Intell Rev, 2015, 43(1): 55 doi: 10.1007/s10462-012-9365-8
    [10] Zhao Y, Liu G L, Tian G H, et al. A survey of visual SLAM based on deep learning. Robot, 2017, 39(6): 889
    [11] Xia L L, Cui J S, Shen R, et al. A survey of image semantics-based visual simultaneous localization and mapping: application-oriented solutions to autonomous navigation of mobile robots. Int J Adv Rob Syst, 2020, 17(3): 4158
    [12] Ding W D, Xu D, Liu X L, et al. Review on visual odometry for mobile robots. Acta Autom Sin, 2018, 44(3): 385
    [13] Wei H, Lü Q, Lin H C, et al. Survey on multi-robot SLAM back-end optimization algorithm. Syst Eng Electron, 2017, 39(11): 2553 doi: 10.3969/j.issn.1001-506X.2017.11.24
    [14] Qin T, Li P L, Shen S J. Relocalization, global optimization and map merging for monocular visual-inertial SLAM // 2018 IEEE International Conference on Robotics and Automation (ICRA). Brisbane, 2018: 1197
    [15] Shotton J, Glocker B, Zach C, et al. Scene coordinate regression forests for camera relocalization in RGB-D images // Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition. Portland, 2013: 2930
    [16] Strasdat H, Montiel J M M, Davison A J. Scale drift-aware large scale monocular SLAM // Robotics: science and Systems VI. Zaragoza, 2010: 7
    [17] Frost D, Prisacariu V, Murray D. Recovering stable scale in monocular SLAM using object-supplemented bundle adjustment. IEEE Trans Rob, 2018, 34(3): 736 doi: 10.1109/TRO.2018.2820722
    [18] Felzenszwalb P F, Girshick R B, McAllester D, et al. Object detection with discriminatively trained part-based models. IEEE Trans Pattern Anal Mach Intell, 2010, 32(9): 1627 doi: 10.1109/TPAMI.2009.167
    [19] Sucar E, Hayet J B. Bayesian scale estimation for monocular SLAM based on generic object detection for correcting scale drift // 2018 IEEE International Conference on Robotics and Automation (ICRA). Brisbane, 2018: 5152
    [20] Bowman S L, Atanasov N, Daniilidis K, et al. Probabilistic data association for semantic SLAM // 2017 IEEE International Conference on Robotics and Automation (ICRA). Singapore, 2017: 1722
    [21] Lianos K N, Schonberger J L, Pollefeys M, et al. VSO: Visual semantic odometry // Proceedings of the European Conference on Computer Vision (ECCV). Munich, 2018: 234
    [22] Mur-Artal R, Tardos J D. ORB-SLAM2: an open-source SLAM system for monocular, stereo and RGB-D cameras. IEEE Trans Rob, 2017, 33(5): 1255 doi: 10.1109/TRO.2017.2705103
    [23] Alismail H, Browning B, Lucey S. Photometric bundle adjustment for vision-based SLAM // Asian Conference on Computer Vision. Taipei, 2016: 324
    [24] Bavle H, Manthe S, de la Puente P, et al. Stereo visual odometry and semantics based localization of aerial robots in indoor environments // 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Madrid, 2018: 1018
    [25] Nicholson L, Milford M, Sünderhauf N. QuadricSLAM: dual quadrics from object detections as landmarks in object-oriented SLAM. IEEE Rob Autom Lett, 2019, 4(1): 1 doi: 10.1109/LRA.2018.2866205
    [26] Yang S C, Scherer S. CubeSLAM: monocular 3D object SLAM. IEEE Trans Rob, 2019, 35(4): 925 doi: 10.1109/TRO.2019.2909168
    [27] Yang S C, Scherer S. Monocular object and plane SLAM in structured environments. IEEE Rob Autom Lett, 2019, 4(4): 3145 doi: 10.1109/LRA.2019.2924848
    [28] Yang S C, Song Y, Kaess M, et al. Pop-up SLAM: semantic monocular plane SLAM for low-texture environments // 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Daejeon, 2016: 1222
    [29] Lin D H, Fidler S, Urtasun R. Holistic scene understanding for 3D object detection with RGBD cameras // Proceedings of the 2013 IEEE International Conference on Computer Vision. Sydney, 2013: 1417
    [30] Gawel A, Don C D, Siegwart R, et al. X-View: graph-based semantic multi-view localization. IEEE Rob Autom Lett, 2018, 3(3): 1687 doi: 10.1109/LRA.2018.2801879
    [31] Tsintotas K A, Bampis L, Gasteratos A. Assigning visual words to places for loop closure detection // 2018 IEEE International Conference on Robotics and Automation (ICRA). Brisbane, 2018: 5979
    [32] Zhang K J, Zhang Y Z, Lü G H, et al. Loop closure detection based on local semantic topology for visual SLAM system. Robot, 2019, 41(5): 649
    [33] Redmon J, Farhadi A. YOLOv3: an incremental improvement[J/OL]. arXiv preprint (2018-04-08)[2020-11-09]. https://arxiv.org/abs/1804.02767.
    [34] Engel J, Schöps T, Cremers D. LSD-SLAM: large-scale direct monocular SLAM // European Conference on Computer Vision. Zurich, 2014: 834
    [35] Ganti P, Waslander S L. Visual SLAM with network uncertainty informed feature selection [J/OL]. arXiv preprint (2019-08-26)[2020-11-09]. https://arxiv.org/abs/1811.11946v1.
    [36] Xue L, Huang N T, Zhao S Y, et al. Low redundancy feature selection using conditional mutual information for short-term load forecasting. J Northeast Dianli Univ Nat Sci Ed, 2019, 39(2): 30
    [37] Qin T, Chen T Q, Chen Y L, et al. AVP-SLAM: semantic visual mapping and localization for autonomous vehicles in the parking lot [J/OL]. arXiv preprint (2020-07-08)[2020-11-09]. https://arxiv.org/abs/2007.01813.
    [38] Stenborg E, Toft C, Hammarstrand L. Long-term visual localization using semantically segmented images // 2018 IEEE International Conference on Robotics and Automation (ICRA). Brisbane, 2018: 6484
    [39] Schönberger J L, Pollefeys M, Geiger A, et al. Semantic visual localization // Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, 2018: 6896
    [40] Naseer T, Oliveira G L, Brox T, et al. Semantics-aware visual localization under challenging perceptual conditions // 2017 IEEE International Conference on Robotics and Automation (ICRA). Singapore, 2017: 2614
    [41] Liang H J, Sanket N J, Fermüller C, et al. SalientDSO: bringing attention to direct sparse odometry. IEEE Trans Autom Sci Eng, 2019, 16(4): 1619 doi: 10.1109/TASE.2019.2900980
    [42] Engel J, Koltun V, Cremers D. Direct sparse odometry. IEEE Trans Pattern Anal Mach Intell, 2018, 40(3): 611 doi: 10.1109/TPAMI.2017.2658577
    [43] Han S Q, Xi Z H. Dynamic scene semantics SLAM based on semantic segmentation. IEEE Access, 2020, 8: 43563 doi: 10.1109/ACCESS.2020.2977684
    [44] Cui L Y, Ma C W. SDF-SLAM: semantic depth filter SLAM for dynamic environments. IEEE Access, 2020, 8: 95301 doi: 10.1109/ACCESS.2020.2994348
    [45] Yu C, Liu Z X, Liu X J, et al. DS-SLAM: a semantic visual SLAM towards dynamic environments // 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Madrid, 2018: 1168
    [46] Bescos B, Fácil J M, Civera J, et al. DynaSLAM: tracking, mapping, and inpainting in dynamic scenes. IEEE Rob Autom Lett, 2018, 3(4): 4076 doi: 10.1109/LRA.2018.2860039
    [47] Brasch N, Bozic A, Lallemand J, et al. Semantic monocular SLAM for highly dynamic environments // 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Madrid, 2018: 393
    [48] Cui L Y, Ma C Q. SOF-SLAM: a semantic visual SLAM for dynamic environments. IEEE Access, 2019, 7: 166528 doi: 10.1109/ACCESS.2019.2952161
    [49] Badrinarayanan V, Kendall A, Cipolla R. SegNet: a deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans Pattern Anal Mach Intell, 2017, 39(12): 2481 doi: 10.1109/TPAMI.2016.2644615
    [50] Wang S, Clark R, Wen H K, et al. End-to-end, sequence-to-sequence probabilistic visual odometry through deep neural networks. Int J Rob Res, 2018, 37(4-5): 513 doi: 10.1177/0278364917734298
    [51] Vasudevan S, Gächter S, Nguyen V, et al. Cognitive maps for mobile robots—an object based approach. Rob Autonom Syst, 2007, 55(5): 359 doi: 10.1016/j.robot.2006.12.008
    [52] Galindo C, Saffiotti A, Coradeschi S, et al. Multi-hierarchical semantic maps for mobile robotics // 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems. Edmonton, 2005: 2278
    [53] Lai K, Bo L F, Fox D. Unsupervised feature learning for 3D scene labeling // 2014 IEEE International Conference on Robotics and Automation (ICRA). Hong Kong, 2014: 3050
    [54] Mozos O M, Triebel R, Jensfelt P, et al. Supervised semantic labeling of places using information extracted from sensor data. J Rob Autonom Syst, 2007, 55(5): 391 doi: 10.1016/j.robot.2006.12.003
    [55] Kundu A, Li Y, Dellaert F, et al. Joint semantic segmentation and 3D reconstruction from monocular video // European Conference on Computer Vision. Zurich, 2014: 703
    [56] Hermans A, Floros G, Leibe B. Dense 3D semantic mapping of indoor scenes from RGB-D images // 2014 IEEE International Conference on Robotics and Automation (ICRA). Hong Kong, 2014: 2631
    [57] Civera J, Gálvez-López D, Riazuelo L, et al. Towards semantic SLAM using a monocular camera // 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems. San Francisco, 2011: 1277
    [58] Gálvez-López D, Salas M, Tardós J D, et al. Real-time monocular object SLAM. Rob Autonom Syst, 2016, 75: 435 doi: 10.1016/j.robot.2015.08.009
    [59] Bochkovskiy A, Wang C Y, Liao H Y M. YOLOv4: optimal speed and accuracy of object detection [J/OL]. arXiv preprint (2020-04-23)[2020-11-09]. https://arxiv.org/abs/2004.10934.
    [60] McCormac J, Handa A, Davison A, et al. SemanticFusion: dense 3D semantic mapping with convolutional neural networks // 2017 IEEE International Conference on Robotics and automation (ICRA). Singapore, 2017: 4628
    [61] Runz M, Buffier M, Agapito L. MaskFusion: real-time recognition, tracking and reconstruction of multiple moving objects // 2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). Munich, 2018: 10
    [62] Ma L N, Stückler J, Kerl C, et al. Multi-view deep learning for consistent semantic mapping with RGB-D cameras // 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Vancouver, 2017: 598
    [63] Xiang Y, Fox D. DA-RNN: semantic mapping with data associated recurrent neural networks [J/OL]. arXiv preprint (2017-05-30)[2020-11-09]. https://arxiv.org/abs/1703.03098v2.
    [64] Izadi S, Kim D, Hilliges O, et al. KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera // Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology. Santa Barbara, 2011: 559
    [65] Mao M Y, Zhang H W, Li S M, et al. SEMANTIC-RTAB-MAP (SRM): a semantic SLAM system with CNNs on depth images. Math Found Comput, 2019, 2(1): 29 doi: 10.3934/mfc.2019003
    [66] Labbe M, Michaud F. Appearance-based loop closure detection for online large-scale and long-term operation. IEEE Trans Rob, 2013, 29(3): 734 doi: 10.1109/TRO.2013.2242375
    [67] Redmon J, Farhadi A. YOLO9000: better, faster, stronger // Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, 2017: 6517
    [68] Cui X, Lu C G, Wang J X. 3D semantic map construction using improved ORB-SLAM2 for mobile robot in edge computing environment. IEEE Access, 2020, 8: 67179 doi: 10.1109/ACCESS.2020.2983488
    [69] Rosinol A, Abate M, Chang Y, et al. Kimera: an open-source library for real-time metric-semantic localization and mapping // 2020 IEEE International Conference on Robotics and Automation (ICRA). Paris, 2020: 1689
    [70] Webb A M, Brown G, Luján M. ORB-SLAM-CNN: lessons in adding semantic map construction to feature-based SLAM // Annual Conference Towards Autonomous Robotic Systems. London, 2019: 221
    [71] Ehlers S F G, Stuede M, Nuelle K, et al. Map management approach for SLAM in large-scale indoor and outdoor areas // 2020 IEEE International Conference on Robotics and Automation (ICRA). Paris, 2020: 9652
    [72] Sunderhauf N, Pham T T, Latif Y, et al. Meaningful maps with object-oriented semantic mapping // 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Vancouver, 2017: 5079
    [73] Liu W, Anguelov D, Erhan D, et al. SSD: single shot multibox detector // European Conference on Computer Vision. Amsterdam, 2016: 21
    [74] McCormac J, Clark R, Bloesch M, et al. Fusion++: volumetric object-level SLAM // 2018 International Conference on 3D Vision (3DV). Verona, 2018: 32
    [75] He K M, Gkioxari G, Dollár P, et al. Mask R-CNN // Proceedings of the 2017 IEEE International Conference on Computer Vision. Venice, 2017: 2961
    [76] Hoang D C, Stoyanov T, Lilienthal A J. Object-RPE: dense 3D reconstruction and pose estimation with convolutional neural networks for warehouse robots // 2019 European Conference on Mobile Robots (ECMR). Prague, 2019: 1
    [77] Whelan T, Leutenegger S, Salas-Moreno R, et al. ElasticFusion: dense SLAM without a pose graph // Robotics: Science and Systems, Rome, 2015
    [78] Hoang D C, Lilienthal A J, Stoyanov T. Panoptic 3D mapping and object pose estimation using adaptively weighted semantic information. IEEE Rob Autom Lett, 2020, 5(2): 1962 doi: 10.1109/LRA.2020.2970682
    [79] Li W, Gu J H, Chen B W, et al. Incremental instance-oriented 3D semantic mapping via RGB-D cameras for unknown indoor scene. J Discrete Dyn Nat Soc, 2020, 2020: 2528954
    [80] Hosseinzadeh M, Latif Y, Pham T, et al. Structure aware SLAM using quadrics and planes // Asian Conference on Computer Vision. Perth, 2018: 410
    [81] Hosseinzadeh M, Li K J, Latif Y, et al. Real-time monocular object-model aware sparse SLAM // 2019 International Conference on Robotics and Automation (ICRA). Montreal, 2019: 7123
    [82] Rünz M, Agapito L. Co-fusion: real-time segmentation, tracking and fusion of multiple objects // 2017 IEEE International Conference on Robotics and Automation (ICRA). Singapore, 2017: 4471
    [83] Keller M, Lefloch D, Lambers M, et al. Real-time 3D reconstruction in dynamic scenes using point-based fusion // 2013 International Conference on 3D Vision-3DV 2013. Seattle, 2013: 1
    [84] Xu B B, Li W B, Tzoumanikas D, et al. MID-Fusion: octree-based object-level multi-instance dynamic SLAM // 2019 International Conference on Robotics and Automation (ICRA). Montreal, 2019: 5231
    [85] Oberlander J, Uhl K, Zollner J M, et al. A region-based SLAM algorithm capturing metric, topological, and semantic properties // 2008 IEEE International Conference on Robotics and Automation. Pasadena, 2008: 1886
    [86] Kostavelis I, Gasteratos A. Semantic mapping for mobile robotics tasks: a survey. Rob Autonom Syst, 2015, 66: 86 doi: 10.1016/j.robot.2014.12.006
    [87] Luo R C, Chiou M. Hierarchical semantic mapping using convolutional neural networks for intelligent service robotics. IEEE Access, 2018, 6: 61287 doi: 10.1109/ACCESS.2018.2873597
    [88] Xie L H, Markham A, Trigoni N. SnapNav: learning mapless visual navigation with sparse directional guidance and visual reference // 2020 IEEE International Conference on Robotics and Automation (ICRA). Paris, 2020: 1682
    [89] Wang X, Huang Q Y, Celikyilmaz A, et al. Reinforced cross-modal matching and self-supervised imitation learning for vision-language navigation // Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, 2019: 6622
    [90] Chaplot D S, Gandhi D, Gupta S, et al. Learning to explore using active neural SLAM [J/OL]. arXiv preprint (2020-04-10)[2020-11-09]. https://arxiv.org/abs/2004.05155.
    [91] Chaplot D S, Salakhutdinov R, Gupta A, et al. Neural topological SLAM for visual navigation // Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, 2020: 12872
    [92] Li B Y, Zou D P, Sartori D, et al. TextSLAM: visual SLAM with planar text features // 2020 IEEE International Conference on Robotics and Automation (ICRA). Paris, 2020: 2102
    [93] Girshick R, Donahue J, Darrell T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation // 2014 IEEE Conference on Computer Vision and Pattern Recognition. Columbus, 2014: 580
    [94] Masci J, Meier U, Cireşan D, et al. Stacked convolutional auto-encoders for hierarchical feature extraction // International Conference on Artificial Neural Networks. Espoo, 2011: 52
    [95] Handa A, Bloesch M, Pătrăucean V, et al. gvnn: neural network library for geometric computer vision // European Conference on Computer Vision. Amsterdam, 2016: 67
    [96] Wang K, Lin Y M, Wang L W, et al. A unified framework for mutual improvement of SLAM and semantic segmentation // 2019 International Conference on Robotics and Automation (ICRA). Montreal, 2019: 5224
    [97] Zhi S F, Bloesch M, Leutenegger S, et al. SceneCode: monocular dense semantic reconstruction using learned encoded scene representations // Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, 2019: 11768
    [98] Yamauchi B. A frontier-based approach for autonomous exploration // Proceedings 1997 IEEE International Symposium on Computational Intelligence in Robotics and Automation CIRA'97. 'Towards New Computational Principles for Robotics and Automation'. Monterey, 1997: 146
    [99] Dai A N, Papatheodorou S, Funk N, et al. Fast frontier-based information-driven autonomous exploration with an MAV [J/OL]. arXiv preprint (2020-02-13)[2020-11-09]. https://arxiv.org/abs/2002.04440.
Publication history
  • Received:  2020-11-09
  • Published online:  2020-12-22
