[1] 权美香, 朴松昊, 李国. 视觉SLAM综述[J]. 智能系统学报, 2016, 11(6): 768−776.

QUAN Meixiang, PIAO Songhao, LI Guo. An overview of visual SLAM [J]. CAAI Transactions on Intelligent Systems, 2016, 11(6): 768−776.
[2] 王朋, 郝伟龙, 倪翠, 等. 视觉SLAM方法综述[J]. 北京航空航天大学学报, 2024, 50(2): 359−367.

WANG Peng, HAO Weilong, NI Cui, et al. An overview of visual SLAM methods [J]. Journal of Beijing University of Aeronautics and Astronautics, 2024, 50(2): 359−367.
[3] RUBLEE E, RABAUD V, KONOLIGE K, et al. ORB: an efficient alternative to SIFT or SURF[C]//ZABIH R. 2011 International Conference on Computer Vision (ICCV). Barcelona: IEEE, 2011: 2564−2571.
[4] MUR-ARTAL R, TARDÓS J D. ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras [J]. IEEE Transactions on Robotics, 2017, 33(5): 1255−1262.
[5] 赵洋, 刘国良, 田国会, 等. 基于深度学习的视觉SLAM综述[J]. 机器人, 2017, 39(6): 889−896.

ZHAO Yang, LIU Guoliang, TIAN Guohui, et al. A survey of visual SLAM based on deep learning [J]. Robot, 2017, 39(6): 889−896.
[6] GARFORTH J, WEBB B. Visual appearance analysis of forest scenes for monocular SLAM[C]//DUDEK G. 2019 International Conference on Robotics and Automation (ICRA). Montreal: IEEE, 2019: 1794−1800.
[7] 王霞, 左一凡, 李磊磊, 等. 基于自适应纹理复杂度的仿生视觉导航方法研究[J]. 导航定位与授时, 2020, 7(4): 35−41.

WANG Xia, ZUO Yifan, LI Leilei, et al. Bionic visual navigation method based on adaptive texture complexity [J]. Navigation Positioning and Timing, 2020, 7(4): 35−41.
[8] 邹斌, 赵小虎, 尹智帅. 基于改进ORB的图像特征匹配算法研究[J]. 激光与光电子学进展, 2021, 58(2): 0210006-1−0210006-8.

ZOU Bin, ZHAO Xiaohu, YIN Zhishuai. Image feature matching algorithm based on improved ORB [J]. Laser & Optoelectronics Progress, 2021, 58(2): 0210006-1−0210006-8.
[9] 刘天赐, 宋延嵩, 李金旺, 等. 基于ORB特征的高分辨率图像拼接改进算法[J]. 激光与光电子学进展, 2021, 58(8): 0810004-1−0810004-8.

LIU Tianci, SONG Yansong, LI Jinwang, et al. Improved algorithm for high-resolution image stitching based on ORB features [J]. Laser & Optoelectronics Progress, 2021, 58(8): 0810004-1−0810004-8.
[10] 郭俊阳, 胡德勇, 潘祥, 等. 基于改进ORB特征的图像处理方法[J]. 海南热带海洋学院学报, 2024, 31(2): 47−52.

GUO Junyang, HU Deyong, PAN Xiang, et al. Image processing method based on improved ORB features [J]. Journal of Hainan Tropical Ocean University, 2024, 31(2): 47−52.
[11] 冯丽琦, 赵亚琴, 孙一超, 等. 复杂环境下森林火灾火焰局部纹理提取方法[J]. 中国农机化学报, 2019, 40(7): 103−108.

FENG Liqi, ZHAO Yaqin, SUN Yichao, et al. Method for extracting local texture of forest fire flame in complex environment [J]. Journal of Chinese Agricultural Mechanization, 2019, 40(7): 103−108.
[12] 崔鑫彤. 面向林业机器人视觉系统的特征匹配方法研究[D]. 北京: 北京林业大学, 2015.

CUI Xintong. Research on feature matching methods for the vision system of forestry robots [D]. Beijing: Beijing Forestry University, 2015.
[13] ZUIDERVELD K. Contrast limited adaptive histogram equalization[M]//HECKBERT P S. Graphics Gems IV. San Diego: Elsevier, 1994: 474−485.
[14] PIZER S M, AMBURN E P, AUSTIN J D, et al. Adaptive histogram equalization and its variations [J]. Computer Vision, Graphics, and Image Processing, 1987, 39(3): 355−368.
[15] REZA A M. Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement [J]. Journal of VLSI Signal Processing Systems for Signal Image and Video Technology, 2004, 38(1): 35−44.
[16] 张铮, 王孙强, 熊盛辉, 等. 结合小波变换和CLAHE的图像增强算法[J]. 现代电子技术, 2022, 45(3): 48−51.

ZHANG Zheng, WANG Sunqiang, XIONG Shenghui, et al. Image enhancement algorithm based on wavelet transform and CLAHE [J]. Modern Electronics Technique, 2022, 45(3): 48−51.
[17] 张艳, 张明路, 蒋志宏, 等. 基于改进的LIP算法低照度图像增强算法[J]. 电子测量与仪器学报, 2019, 33(11): 147−154.

ZHANG Yan, ZHANG Minglu, JIANG Zhihong, et al. Low-illumination image enhancement based on improved LIP model [J]. Journal of Electronic Measurement and Instrumentation, 2019, 33(11): 147−154.
[18] 明浩, 苏喜友. 利用特征分割和病斑增强的杨树叶部病害识别[J]. 浙江农林大学学报, 2020, 37(6): 1159−1166.

MING Hao, SU Xiyou. Image recognition of poplar leaf diseases with feature segmentation and lesion enhancement [J]. Journal of Zhejiang A&F University, 2020, 37(6): 1159−1166.
[19] 宋俊杰, 宋欣, 何建祥, 等. 基于图像增强的温室图像特征点提取[J]. 计算技术与自动化, 2022, 41(2): 92−99.

SONG Junjie, SONG Xin, HE Jianxiang, et al. Feature points extraction of greenhouse image based on image enhancement [J]. Computing Technology and Automation, 2022, 41(2): 92−99.
[20] ITO T, ISHIHARA K, DEURA I, et al. Tissue characterization of uterine myometrium using the ultrasound gray-level histogram width [J]. Journal of Medical Ultrasonics, 2007, 34(3): 189−192.
[21] 崔建伟, 王冬青, 刘金燕. 基于高斯模糊的单幅图像去雾算法[J]. 自动化与仪器仪表, 2021(1): 9−11.

CUI Jianwei, WANG Dongqing, LIU Jinyan. Single image dehazing algorithm based on Gaussian blur [J]. Automation and Instrumentation, 2021(1): 9−11.
[22] HARRIS C, STEPHENS M. A combined corner and edge detector[C]//British Machine Vision Association. Proceedings of the 4th Alvey Vision Conference (AVC). Sheffield: University of Sheffield Press, 1988: 147−151.
[23] 黎达, 邢艳秋, 黄佳鹏, 等. 基于双目立体视觉SLAM的林下实时定位[J]. 中南林业科技大学学报, 2021, 41(2): 16−22, 34.

LI Da, XING Yanqiu, HUANG Jiapeng, et al. Real-time positioning in forests based on binocular stereo visual SLAM [J]. Journal of Central South University of Forestry & Technology, 2021, 41(2): 16−22, 34.
[24] 高程程, 惠晓威. 基于灰度共生矩阵的纹理特征提取[J]. 计算机系统应用, 2010, 19(6): 195−198.

GAO Chengcheng, HUI Xiaowei. GLCM-based texture feature extraction [J]. Computer Systems & Applications, 2010, 19(6): 195−198.
[25] 冯建辉, 杨玉静. 基于灰度共生矩阵提取纹理特征图像的研究[J]. 北京测绘, 2007, 21(3): 19−22.

FENG Jianhui, YANG Yujing. Study of texture image extraction based on gray level co-occurrence matrix [J]. Beijing Surveying and Mapping, 2007, 21(3): 19−22.
[26] GONZALEZ R C, WOODS R E. 数字图像处理[M]. 阮秋琦, 译. 3版. 北京: 电子工业出版社, 2011.

GONZALEZ R C, WOODS R E. Digital Image Processing [M]. RUAN Qiuqi, trans. 3rd ed. Beijing: Publishing House of Electronics Industry, 2011.
[27] 董心玉, 范文义, 田甜. 基于面向对象的资源3号遥感影像森林分类研究[J]. 浙江农林大学学报, 2016, 33(5): 816−825.

DONG Xinyu, FAN Wenyi, TIAN Tian. Object-based forest type classification with ZY-3 remote sensing data [J]. Journal of Zhejiang A&F University, 2016, 33(5): 816−825.
[28] NANNI L, BRAHNAM S, GHIDONI S, et al. Different approaches for extracting information from the co-occurrence matrix[J/OL]. PLoS One, 2013, 8(12): e83554[2024-11-24]. DOI: 10.1371/journal.pone.0083554.