
Lane detection based on dual attention mechanism

REN Feng-lei, ZHOU Hai-bo, YANG Lu, HE Xin

Citation: REN Feng-lei, ZHOU Hai-bo, YANG Lu, HE Xin. Lane detection based on dual attention mechanism[J]. Chinese Optics, 2023, 16(3): 645-653. doi: 10.37188/CO.2022-0033

doi: 10.37188/CO.2022-0033
Funds: Supported by Key Projects of Tianjin Natural Science Foundation (No. 17JCZDJC30400); Special Project for Research and Development in Key Areas of Guangdong Province (No. 2019B090922002)
More Information
    Author biographies:

    REN Feng-lei (1991—), male, born in Cangzhou, Hebei Province. Ph.D. in engineering, lecturer. He received his B.S. degree from Jilin University in 2015 and his Ph.D. degree from the Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences in 2020. His research interests include digital image processing, autonomous driving, and visual environment perception. E-mail: renfenglei15@mails.ucas.edu.cn

    ZHOU Hai-bo (1973—), male, born in Zhaodong, Heilongjiang Province. Ph.D., professor, doctoral supervisor. He received his B.S. and M.S. degrees from Jiamusi University in 1998 and 2005, respectively, and his Ph.D. degree from Jilin University in 2009. His research interests include computer vision, artificial intelligence, and intelligent robotics. E-mail: haibo_zhou@163.com

  • CLC number: TP394.1

  • Abstract:

    To improve the detection performance of lane detection algorithms in complex scenarios such as obstacle occlusion, this paper proposes a multi-lane detection algorithm based on a dual attention mechanism. First, a lane semantic segmentation network built on spatial (position) and channel attention mechanisms is designed to produce a binary segmentation separating lane-line pixels from background regions. Then, an HNet network is introduced, and the perspective transformation matrix it outputs is used to warp the segmentation map into a bird's-eye view, where curves are fitted and then inversely transformed back into the original image space, completing multi-lane detection. Finally, the region enclosed by the lane lines on either side of the image centerline is defined as the current driving lane. The proposed algorithm achieves an accuracy of 96.63% on the Tusimple dataset at a real-time speed of 134 frame/s, and a precision of 77.32% on the CULane dataset. Experimental results show that the algorithm can detect multiple lane lines and the driving lane in real time under various scenarios, including obstacle occlusion, with significantly improved performance compared with existing algorithms.
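The warp-fit-unwarp step described in the abstract can be sketched in a few lines of numpy. This is a minimal illustration under stated assumptions, not the paper's implementation: here the homography `H` is a given input (in the paper it is predicted by HNet) and `lane_pixels` stands in for the (x, y) coordinates of one lane instance from the segmentation output.

```python
import numpy as np

def apply_homography(H, pts):
    """Apply a 3x3 homography to an (N, 2) array of (x, y) points."""
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]   # divide out the projective scale

def fit_lane_birdseye(lane_pixels, H, degree=2, samples=20):
    """Warp lane pixel coordinates to a bird's-eye view with homography H,
    fit x = f(y) with a polynomial (lanes are near-vertical in BEV),
    and map the sampled curve back to the original image space."""
    bev = apply_homography(H, lane_pixels)
    xs, ys = bev[:, 0], bev[:, 1]
    coeffs = np.polyfit(ys, xs, degree)
    y_s = np.linspace(ys.min(), ys.max(), samples)
    x_s = np.polyval(coeffs, y_s)
    curve_bev = np.stack([x_s, y_s], axis=1)
    return apply_homography(np.linalg.inv(H), curve_bev)
```

Fitting in the bird's-eye view rather than the image plane is what makes a low-degree polynomial sufficient: perspective distortion is removed before the least-squares fit.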

  • Figure 1. Schematic diagram of lane detection

    Figure 2. Schematic diagram of semantic segmentation of an image

    Figure 3. Schematic diagram of the proposed lane detection algorithm

    Figure 4. Diagram of atrous convolution (r = 1, 2 and 4 from left to right)
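As a concrete reference for Figure 4, the effect of the dilation rate r can be sketched in one dimension. This is an illustrative numpy version only, not the network's actual 2-D layers: with rate r, the kernel taps are spaced r samples apart, so the receptive field grows without adding parameters.

```python
import numpy as np

def dilated_conv1d(x, kernel, rate):
    """1-D convolution with dilation rate `rate` (valid padding).
    Effective receptive field: (len(kernel) - 1) * rate + 1."""
    k = len(kernel)
    span = (k - 1) * rate + 1
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * rate] for j in range(k))
    return out
```

With r = 1 this reduces to an ordinary convolution; with r = 2 a 3-tap kernel already covers a span of 5 samples.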

    Figure 5. Schematic diagram of the position attention module

    Figure 6. Schematic diagram of the channel attention module
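The two branches in Figures 5 and 6 can be illustrated with a stripped-down numpy sketch. This is a simplified stand-in, not the network itself: the dual attention design of [16] uses learned query/key/value convolutions and learned fusion weights, whereas here identity projections and fixed weights are assumed purely to show the two affinity matrices, spatial (N×N) versus channel (C×C).

```python
import numpy as np

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def position_attention(feat):
    """Position attention: each spatial location attends to all others.
    feat: (C, H, W) feature map; returns the same shape."""
    C, H, W = feat.shape
    f = feat.reshape(C, H * W)            # (C, N), N = H*W
    attn = softmax(f.T @ f, axis=-1)      # (N, N) spatial affinity
    return (f @ attn.T).reshape(C, H, W)

def channel_attention(feat):
    """Channel attention: each channel attends to all other channels."""
    C, H, W = feat.shape
    f = feat.reshape(C, H * W)
    attn = softmax(f @ f.T, axis=-1)      # (C, C) channel affinity
    return (attn @ f).reshape(C, H, W)

def dual_attention(feat, alpha=1.0, beta=1.0):
    """Fuse the two branches with a residual connection to the input."""
    return feat + alpha * position_attention(feat) + beta * channel_attention(feat)
```

The key contrast: the position branch models long-range dependencies across pixels (helpful when a lane is partly occluded), while the channel branch reweights feature maps as a whole.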

    Figure 7. Lane detection results of the proposed algorithm on the Tusimple dataset

    Figure 8. Lane detection results of the proposed algorithm on the CULane dataset

    Table 1. Quantitative experimental results of the proposed algorithm on the Tusimple dataset

    Method           acc(%)   FP(%)   FN(%)   FPS
    SCNN[18]         96.53    6.17    1.80    7.5
    LaneNet[13]      96.38    7.80    2.44    52.6
    PolylaneNet[19]  93.36    9.42    9.33    115
    FastDraw[20]     95.20    7.60    4.50    90.3
    R-50-E2E[21]     96.04    3.11    4.09    -
    Ours             96.63    6.02    2.03    134

    Table 2. Quantitative experimental results of the proposed algorithm on the CULane dataset

    Method        Normal  Crowd  Dazzle  Shadow  Noline  Arrow  Curve  Cross  Night  Total
    SCNN[18]      90.60   69.70  58.50   66.90   43.40   84.10  64.40  1990   66.10  71.60
    FastDraw[20]  85.90   63.60  57.00   69.90   40.60   79.40  65.20  7013   57.80  -
    UFSD-18[1]    87.70   66.00  58.40   62.80   40.20   81.00  57.90  1743   62.10  68.40
    UFSD-34[1]    90.70   70.20  59.50   69.30   44.40   85.70  69.50  2037   66.70  72.30
    LaneATT[22]   91.17   72.71  65.82   68.03   49.13   87.82  63.75  1020   68.58  75.13
    Ours          91.21   76.33  69.51   73.25   50.16   88.72  71.25  1265   70.73  77.32
  • [1] QIN Z Q, WANG H Y, LI X. Ultra fast structure-aware deep lane detection[C]. Proceedings of the 16th European Conference on Computer Vision, Springer, 2020: 276-291.
    [2] CHEN X D, AI D H, ZHANG J CH, et al. Gabor filter fusion network for pavement crack detection[J]. Chinese Optics, 2020, 13(6): 1293-1301. (in Chinese) doi: 10.37188/CO.2020-0041
    [3] REN F L, HE X, WEI ZH H, et al. Semantic segmentation based on DeepLabV3+ and superpixel optimization[J]. Optics and Precision Engineering, 2019, 27(12): 2722-2729. (in Chinese) doi: 10.3788/OPE.20192712.2722
    [4] YU ZH P, REN X ZH, HUANG Y Y, et al. Detecting lane and road markings at a distance with perspective transformer layers[C]. Proceedings of the 23rd International Conference on Intelligent Transportation Systems, IEEE, 2020: 1-6.
    [5] CHIU K Y, LIN S F. Lane detection using color-based segmentation[C]. Proceedings of the IEEE Intelligent Vehicles Symposium, IEEE, 2005: 706-711.
    [6] HUR J, KANG S N, SEO S W. Multi-lane detection in urban driving environments using conditional random fields[C]. Proceedings of 2013 IEEE Intelligent Vehicles Symposium (IV), IEEE, 2013: 1297-1302.
    [7] JUNG H, MIN J, KIM J. An efficient lane detection algorithm for lane departure detection[C]. Proceedings of 2013 IEEE Intelligent Vehicles Symposium (IV), IEEE, 2013: 976-981.
    [8] BORKAR A, HAYES M, SMITH M T. A novel lane detection system with efficient ground truth generation[J]. IEEE Transactions on Intelligent Transportation Systems, 2012, 13(1): 365-374. doi: 10.1109/TITS.2011.2173196
    [9] VAN GANSBEKE W, DE BRABANDERE B, NEVEN D, et al. End-to-end lane detection through differentiable least-squares fitting[C]. Proceedings of 2019 IEEE/CVF International Conference on Computer Vision Workshop, IEEE, 2019: 905-913.
    [10] LIU T, CHEN ZH W, YANG Y, et al. Lane detection in low-light conditions using an efficient data enhancement: light conditions style transfer[C]. Proceedings of 2020 IEEE Intelligent Vehicles Symposium, IEEE, 2020: 1394-1399.
    [11] CHANG D, CHIRAKKAL V, GOSWAMI S, et al. Multi-lane detection using instance segmentation and attentive voting[C]. Proceedings of the 19th International Conference on Control, Automation and Systems, IEEE, 2020: 1538-1542.
    [12] KIM J, LEE M. Robust lane detection based on convolutional neural network and random sample consensus[C]. Proceedings of the 21st International Conference on Neural Information Processing, Springer, 2014: 454-461.
    [13] NEVEN D, DE BRABANDERE B, GEORGOULIS S, et al. Towards end-to-end lane detection: an instance segmentation approach[C]. Proceedings of 2018 IEEE Intelligent Vehicles Symposium (IV), IEEE, 2018: 286-291.
    [14] LEE H, SOHN K, MIN D. Unsupervised low-light image enhancement using bright channel prior[J]. IEEE Signal Processing Letters, 2020, 27: 251-255. doi: 10.1109/LSP.2020.2965824
    [15] YOO S, LEE H S, MYEONG H, et al. End-to-end lane marker detection via row-wise classification[C]. Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, IEEE, 2020: 4335-4343.
    [16] FU J, LIU J, TIAN H J, et al. Dual attention network for scene segmentation[C]. Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2019: 3141-3149.
    [17] HE K M, ZHANG X Y, REN SH Q, et al. Deep residual learning for image recognition[C]. Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 2016: 770-778.
    [18] PAN X G, SHI J P, LUO P, et al. Spatial as deep: spatial CNN for traffic scene understanding[C]. Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI Press, 2018: 7276-7283.
    [19] CHEN ZH P, LIU Q F, LIAN CH F. PointLaneNet: efficient end-to-end CNNs for accurate real-time lane detection[C]. Proceedings of 2019 IEEE Intelligent Vehicles Symposium (IV), IEEE, 2019: 2563-2568.
    [20] PHILION J. FastDraw: addressing the long tail of lane detection by adapting a sequential prediction network[C]. Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, 2019: 11574-11583.
    [21] YOO S, LEE H S, MYEONG H, et al. End-to-end lane marker detection via row-wise classification[C]. Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, IEEE, 2020: 4335-4343.
    [22] TABELINI L, BERRIEL R, PAIXÃO T M, et al. Keep your eyes on the lane: real-time attention-guided lane detection[C]. Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, 2021: 294-302.
    [23] CHEN X D, SHENG J, YANG J, et al. Ultrasound image segmentation based on a multi-parameter Gabor filter and multiscale local level set method[J]. Chinese Optics, 2020, 13(5): 1075-1084. (in Chinese) doi: 10.37188/CO.2020-0025
    [24] ZHOU W ZH, FAN CH, HU X P, et al. Multi-scale singular value decomposition polarization image fusion defogging algorithm and experiment[J]. Chinese Optics, 2021, 14(2): 298-306. (in Chinese) doi: 10.37188/CO.2020-0099
Publication history
  • Received: 2022-03-04
  • Revised: 2022-04-06
  • Available online: 2022-06-16
