Coal Geology & Exploration
Abstract
Objective Underground coal mines commonly exhibit low illuminance, weak textures, and feature degradation caused by structured, repetitive surfaces. Such scenes leave a visual simultaneous localization and mapping (SLAM) system with too few effective features and high mismatch rates, severely compromising its localization accuracy and robustness. Methods This study proposed an edge awareness-enhanced visual SLAM method. First, an edge awareness-constrained low-illuminance image enhancement module was developed: images with clear textures and uniform illumination were obtained using a Retinex algorithm optimized with an adaptive gradient-domain guided filter, which significantly improved feature extraction under low and uneven lighting. Second, an edge awareness-enhanced feature extraction and matching module was introduced into the visual odometry. A point-line feature fusion strategy was employed to improve the detectability and matching accuracy of features in weakly textured and structured scenes: line features were extracted with the EDLines algorithm and point features with the Oriented FAST and Rotated BRIEF (ORB) algorithm, followed by precise feature matching using grid-based motion statistics (GMS) and a ratio test. Finally, the proposed method was comprehensively evaluated against ORB-SLAM2 and ORB-SLAM3 on the TUM dataset and on a dataset of actual underground coal mine scenes, covering image enhancement, feature matching, and localization. Results and Conclusions On the TUM dataset, the proposed method reduced the root mean square errors (RMSEs) of the absolute and relative trajectory errors by 4%−38.46% and 8.62%−50%, respectively, compared with ORB-SLAM2, and by 0−61.68% and 3.63%−47.05%, respectively, compared with ORB-SLAM3. Experiments on actual underground coal mine scenes showed that the trajectories estimated by the proposed method aligned more closely with the reference camera trajectory. The proposed method thus effectively improves the accuracy and robustness of visual SLAM in feature degradation scenes in underground coal mines, providing a technical solution for deploying visual SLAM in such settings. Research on visual SLAM methods tailored to feature degradation scenes in underground coal mines holds great significance for advancing the roboticization of mobile equipment used in coal mines.
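The low-illuminance enhancement stage can be sketched in a few lines. The sketch below uses single-scale Retinex with a plain box blur as the illumination estimate; the paper's adaptive gradient-domain guided filter (an edge-preserving smoother) would replace the box blur, and all function names here are illustrative, not taken from the authors' code.

```python
import numpy as np

def box_blur(img, k=15):
    """Mean filter via an integral image. A stand-in for the paper's
    adaptive gradient-domain guided filter, which would also preserve edges."""
    pad = k // 2
    p = np.pad(img.astype(np.float64), pad, mode="edge")
    c = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
    c[1:, 1:] = p.cumsum(axis=0).cumsum(axis=1)
    # One k-by-k window sum per pixel, read off the integral image.
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def retinex_enhance(img, k=15):
    """Single-scale Retinex: reflectance = log(image) - log(illumination),
    stretched back to the 8-bit range."""
    img = img.astype(np.float64) + 1.0   # avoid log(0)
    illum = box_blur(img, k)             # smooth illumination estimate
    r = np.log(img) - np.log(illum)      # reflectance in the log domain
    r = (r - r.min()) / (r.max() - r.min() + 1e-12)
    return (r * 255.0).astype(np.uint8)
```

On a dark, unevenly lit frame this stretches local contrast so that detectors such as ORB and EDLines find more stable features; the guided-filter variant additionally keeps edges sharp in the illumination estimate, which is the "edge awareness" constraint.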
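The matching stage combines a ratio test with grid-based motion statistics. The toy sketch below illustrates both ideas on synthetic float descriptors rather than real ORB binary descriptors, and the simplified grid check only counts support for each cell pair, whereas the full GMS algorithm also pools neighboring cells and uses a statistical threshold; all names are illustrative.

```python
import numpy as np
from collections import Counter

def ratio_test_match(desc1, desc2, ratio=0.7):
    """Lowe's ratio test: keep a match only when the best candidate is
    clearly closer than the second best (Euclidean distance here)."""
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    matches = []
    for i in range(len(desc1)):
        order = np.argsort(d[i])
        best, second = order[0], order[1]
        if d[i, best] < ratio * d[i, second]:
            matches.append((i, int(best)))
    return matches

def gms_filter(kp1, kp2, matches, img_size, grid=4, min_support=2):
    """Toy grid-based motion statistics: true matches move coherently, so a
    (cell in image 1, cell in image 2) pair supported by several matches is
    trusted, while an isolated cell pair is rejected as a likely mismatch."""
    def cell(pt, size):
        gx = min(int(pt[0] / size[0] * grid), grid - 1)
        gy = min(int(pt[1] / size[1] * grid), grid - 1)
        return gy * grid + gx
    pairs = [(cell(kp1[i], img_size), cell(kp2[j], img_size)) for i, j in matches]
    support = Counter(pairs)
    return [m for m, p in zip(matches, pairs) if support[p] >= min_support]
```

The ratio test prunes ambiguous descriptor matches; the grid check then exploits motion smoothness, which is what makes GMS robust in the repetitive, weakly textured scenes the paper targets.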
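The absolute trajectory error RMSE quoted in the results can be computed roughly as follows. This sketch rigidly aligns the estimated positions to ground truth with a Kabsch/Umeyama-style fit, as the standard TUM benchmark tooling does, but handles positions only (no timestamps or orientation residuals); the function name is illustrative.

```python
import numpy as np

def ate_rmse(est_xyz, gt_xyz):
    """RMSE of the absolute trajectory error: rigidly align estimated
    positions to ground truth (Kabsch), then average the residuals."""
    est = np.asarray(est_xyz, dtype=float)
    gt = np.asarray(gt_xyz, dtype=float)
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g              # centered trajectories
    U, _, Vt = np.linalg.svd(E.T @ G)         # SVD of the cross-covariance
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # best-fit rotation est -> gt
    aligned = E @ R.T + mu_g
    err = aligned - gt
    return float(np.sqrt((err ** 2).sum(axis=1).mean()))
```

The relative trajectory error would instead compare frame-to-frame motions over a fixed interval; both metrics are defined with the TUM RGB-D benchmark [34].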
Keywords
visual SLAM, feature degradation, edge awareness, image enhancement, point and line feature fusion, TUM dataset
DOI
10.12363/issn.1001-1986.24.08.0544
Recommended Citation
MU Qi, LIANG Xin, GUO Yuanjie, et al. (2025) "An edge awareness-enhanced visual SLAM method for underground coal mines," Coal Geology & Exploration: Vol. 53: Iss. 3, Article 20.
DOI: 10.12363/issn.1001-1986.24.08.0544
Available at: https://cge.researchcommons.org/journal/vol53/iss3/20
Reference
[1] WANG Guofa,ZHANG Jianzhong,XUE Guohua,et al. Progress and reflection of intelligent geological guarantee technology in coal mining face[J]. Coal Geology & Exploration,2023,51(2):12−26. (in Chinese)
[2] WANG Haijun,CAO Yun,WANG Honglei. Research and practice on key technologies for intelligentization of coal mine[J]. Coal Geology & Exploration,2023,51(1):44−54. (in Chinese)
[3] CHEN Weifeng,ZHOU Chengjun,SHANG Guangtao,et al. SLAM overview:From single sensor to heterogeneous fusion[J]. Remote Sensing,2022,14(23):6033.
[4] GE Shirong,HU Eryi,LI Yunwang. New progress and direction of robot technology in coal mine[J]. Journal of China Coal Society,2023,48(1):54−73. (in Chinese)
[5] HU Boni,CHEN Lin,XU Bingli,et al. Real-time dense point cloud generation and digital model construction of surface environment based on UAV platform[J]. National Remote Sensing Bulletin,2024,28(5):1206−1221. (in Chinese)
[6] GAO Yinan,YAO Wanqiang,LIN Xiaohu,et al. Visual SLAM keyframe selection method with multiple constraints in underground coal mines[J]. Journal of China Coal Society,2024,49(Sup.1):472−482. (in Chinese)
[7] HUANG Zenghua,GE Shirong,HE Yonghua,et al. Research on the intelligent system architecture and control strategy of mining robot crowds[J]. Energies,2024,17(8):1834.
[8] LI Menggang,HU Kun,LIU Yuwang,et al. A multimodal robust simultaneous localization and mapping approach driven by geodesic coordinates for coal mine mobile robots[J]. Remote Sensing,2023,15(21):5093.
[9] XUE Guanghui,ZHANG Zhenghao,ZHANG Guiyi,et al. Improvement of point cloud feature extraction and alignment algorithms and LiDAR SLAM in coal mine underground[J/OL]. Coal Science and Technology,2024:1−12 [2024-07-23]. https://kns.cnki.net/kcms/detail/11.2402.TD.20240722.1557.003.html. (in Chinese)
[10] YU Rui,FANG Xinqiu,HU Chengjun,et al. Research on positioning method of coal mine mining equipment based on monocular vision[J]. Energies,2022,15(21):8068.
[11] WANG Jiwu,WAN Weipeng,SHANG Xueqiang,et al. Semantic visual SLAM system based on image enhancement and adaptive thresholding[J]. Computer Integrated Manufacturing Systems,2024,30(12):4217−4232. (in Chinese)
[12] GONG Yun,XIE Xinyu. Research on coal mine underground image enhancement technology based on homomorphic filtering method[J]. Coal Science and Technology,2023,51(3):241−250. (in Chinese)
[13] ZHAN Bichao,WU Yiquan,JI Shouxin. Infrared image enhancement method based on stationary wavelet transformation and Retinex[J]. Acta Optica Sinica,2010,30(10):2788−2793. (in Chinese)
[14] WANG Yifan,WANG Hongyu,YIN Chuanli,et al. Biologically inspired image enhancement based on Retinex[J]. Neurocomputing,2016,177:373−384.
[15] MEI Yingjie,NING Yuan,CHEN Jinjun. Block-adjusted image enhancement algorithm combining dark channel prior with MSRCR[J]. Acta Photonica Sinica,2019,48(7):0710005. (in Chinese)
[16] LOWE D G. Distinctive image features from scale-invariant keypoints[J]. International Journal of Computer Vision,2004,60(2):91−110.
[17] RUBLEE E,RABAUD V,KONOLIGE K,et al. ORB:An efficient alternative to SIFT or SURF[C]//2011 International Conference on Computer Vision. Barcelona:IEEE,2011.
[18] MA Aiqiang,YAO Wanqiang. Multi-sensor adaptive fusion SLAM method for underground mobile robots in coal mines[J]. Journal of Mine Automation,2024,50(5):107−117. (in Chinese)
[19] ZHANG Xuhui,YANG Hongqiang,BAI Linna,et al. Research on the visual positioning method of tunneling equipment based on the improved RANSAC feature extraction[J]. Chinese Journal of Scientific Instrument,2022,43(12):168−177. (in Chinese)
[20] GROMPONE VON GIOI R,JAKUBOWICZ J,MOREL J M,et al. LSD:A fast line segment detector with a false detection control[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2010,32(4):722−732.
[21] YAO Jianjun,LI Yingzhao,WU Yang,et al. Visual-inertial simultaneous localization and mapping based on point-and-line features[J]. Journal of Harbin Engineering University,2024,45(4):771−778. (in Chinese)
[22] GONG Kun,XU Xin,CHEN Xiaoqing,et al. Binocular vision SLAM with fused point and line features in weak texture environment[J]. Optics and Precision Engineering,2024,32(5):752−763. (in Chinese)
[23] BIAN Jiawang,LIN Wenyan,LIU Yun,et al. GMS:Grid-based motion statistics for fast,ultra-robust feature correspondence[J]. International Journal of Computer Vision,2020,128(6):1580−1593.
[24] WANG Di,HU Liaolin. Improved feature stereo matching method based on binocular vision[J]. Acta Electronica Sinica,2022,50(1):157−166. (in Chinese)
[25] CHEN Xinyu,YU Yantao. An unsupervised low-light image enhancement method for improving V-SLAM localization in uneven low-light construction sites[J]. Automation in Construction,2024,162:105404.
[26] LIU Dong,YU Tao,CONG Ming,et al. Visual SLAM method for dynamic environment based on deep learning image features[J]. Journal of Huazhong University of Science and Technology (Natural Science Edition),2024,52(6):156−163. (in Chinese)
[27] MUR-ARTAL R,TARDÓS J D. ORB-SLAM2:An open-source SLAM system for monocular,stereo,and RGB-D cameras[J]. IEEE Transactions on Robotics,2017,33(5):1255−1262.
[28] LAND E H. The Retinex theory of color vision[J]. Scientific American,1978,237(6):108−128.
[29] HE Kaiming,SUN Jian,TANG Xiaoou. Guided image filtering[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2013,35(6):1397−1409.
[30] KOU Fei,CHEN Weihai,WEN Changyun,et al. Gradient domain guided image filtering[J]. IEEE Transactions on Image Processing,2015,24(11):4528−4539.
[31] XU Xin,YU Zhibin. Low-light image enhancement based on Retinex theory[C]//2023 IEEE 6th International Conference on Electronic Information and Communication Technology (ICEICT). Qingdao:IEEE,2023.
[32] AKINLAR C,TOPAL C. EDLines:A real-time line segment detector with a false detection control[J]. Pattern Recognition Letters,2011,32(13):1633−1642.
[33] PUMAROLA A,VAKHITOV A,AGUDO A,et al. PL-SLAM:Real-time monocular visual SLAM with points and lines[C]//2017 IEEE International Conference on Robotics and Automation (ICRA). Singapore:IEEE,2017.
[34] STURM J,ENGELHARD N,ENDRES F,et al. A benchmark for the evaluation of RGB-D SLAM systems[C]//2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. Vilamoura-Algarve:IEEE,2012.
[35] CAMPOS C,ELVIRA R,RODRÍGUEZ J J G,et al. ORB-SLAM3:An accurate open-source library for visual,visual-inertial,and multimap SLAM[J]. IEEE Transactions on Robotics,2021,37(6):1874−1890.