Coal Geology & Exploration
Abstract
Background: Precise positioning of tunnel boring machines (TBMs) in underground coal mines plays a fundamental role in the automated and intelligent guidance and control of fully mechanized heading faces. However, traditional visual positioning methods show limited effectiveness in underground roadways due to their narrow, enclosed spaces, insufficient illumination, and sparse textures. This study proposes a visual positioning method for TBMs in underground coal mines based on anchor net features.
Methods: A three-stream depthwise separable convolutional neural network (TSCR-NET) for image enhancement was employed to estimate the reflection, illumination, and noise components of images individually. By adjusting illumination while suppressing noise, images with uniform illumination and clear textures were obtained, enhancing the adaptability of the visual positioning system under complex illumination conditions. An extraction and matching method for anchor net line features was designed; it strengthens feature extraction using edge drawing lines (EDLines) with adaptive thresholding and improves matching accuracy using the structural similarity index measure (SSIM). A pose estimation model minimizing the reprojection errors of line features was constructed; combined with pose graph optimization, this model enables precise TBM positioning. Furthermore, an experimental platform was established, and experiments were designed for quantitative analyses of image enhancement, line feature processing, and positioning performance.
Results and Conclusions: The results indicate that TSCR-NET yielded higher peak signal-to-noise ratio (PSNR) and SSIM values than the multi-scale retinex with color restoration (MSRCR) and zero-reference deep curve estimation (Zero-DCE) algorithms.
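The SSIM, used here both to score line-feature matches and to evaluate enhancement quality, can be illustrated in its single-window form. This is a minimal sketch with the standard stabilizing constants C1 and C2 for 8-bit images, not the paper's implementation (which would typically evaluate SSIM over local windows around each line segment):

```python
import numpy as np

def ssim(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Single-window SSIM between two equally sized grayscale patches."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    mu_x, mu_y = x.mean(), y.mean()          # luminance terms
    var_x, var_y = x.var(), y.var()          # contrast terms
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()  # structure term
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

# A patch compared with itself scores exactly 1.0;
# compared with its photometric inverse, the score drops sharply.
patch = np.arange(64, dtype=np.float64).reshape(8, 8) * 4
print(ssim(patch, patch))        # 1.0
print(ssim(patch, 255 - patch))  # well below 1.0
```

Candidate line matches whose neighborhood patches score above a chosen SSIM threshold would be accepted; the threshold value itself is a tuning choice not specified in the abstract.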
The line feature processing method designed in this study outperformed traditional algorithms in both the number of extracted features and matching accuracy, laying a solid foundation for subsequent positioning. In positioning experiments, the proposed method was compared with other line feature-based visual positioning methods on the EuRoC dataset and in a real roadway scene. The comparison revealed that the proposed method outperformed the real-time monocular visual SLAM with points and lines (PL-VINS) algorithm on nine EuRoC data sequences. Furthermore, in an anchor net-supported roadway scene, continuous TBM tracking was conducted over a range of 60 m. The proposed method yielded a maximum error of 163 mm, a 23.5% reduction compared with the 213 mm obtained using the PL-VINS algorithm. Additionally, the root mean square error (RMSE) decreased from 0.531 to 0.426, a reduction of 19.8%. Overall, the visual positioning method proposed in this study achieves high accuracy and stability, providing a valuable reference for long-distance pose detection of TBMs in underground anchor net-supported roadways.
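The relative improvements quoted above follow directly from the reported errors; a quick arithmetic check:

```python
def reduction(baseline: float, proposed: float) -> float:
    """Percentage reduction of the proposed method's error vs. a baseline."""
    return (baseline - proposed) / baseline * 100.0

print(f"{reduction(213.0, 163.0):.1f}%")  # maximum error (mm): 23.5%
print(f"{reduction(0.531, 0.426):.1f}%")  # RMSE: 19.8%
```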
Keywords
tunnel boring machine (TBM), visual positioning, image enhancement, line feature extraction and matching, motion estimation, anchor net feature, coal mine
DOI
10.12363/issn.1001-1986.25.03.0207
Recommended Citation
ZHANG Xuhui, CHI Yunkai, DU Yuyang, et al. (2025) "A visual positioning method for tunnel boring machines in underground coal mines based on anchor net features," Coal Geology & Exploration: Vol. 53: Iss. 6, Article 22.
DOI: 10.12363/issn.1001-1986.25.03.0207
Available at: https://cge.researchcommons.org/journal/vol53/iss6/22