Lane Detection Using Hough Transformation and YOLOv8

  • Nguyen Viet Bach

    University of Science and Technology of Hanoi, Vietnam Academy of Science and Technology, 18 Hoang Quoc Viet Street, Cau Giay, Hanoi, Vietnam
  • Pham Xuan Tung

    University of Science and Technology of Hanoi, Vietnam Academy of Science and Technology, 18 Hoang Quoc Viet Street, Cau Giay, Hanoi, Vietnam
Email: pham-xuan.tung@usth.edu.vn
Keywords: Image processing, deep learning, object detection, lane detection, AI, YOLO, OpenCV

Abstract

Autonomous vehicles necessitate the integration of advanced technologies such as computer vision and deep learning to comprehend and navigate their surroundings. A crucial yet challenging component of this integration is the accurate detection of lanes, which can be influenced by a multitude of varying lane characteristics and conditions. This research undertakes a comparative analysis of lane detection methodologies, focusing explicitly on traditional image processing techniques and Convolutional Neural Networks (CNNs). The evaluation used a sample of 500 images from the CULane dataset, which encompasses a diverse range of traffic scenarios. Initially, a method combining Gaussian blurring, Canny edge detection, and the Hough line transform was examined. Despite its efficiency, operating at 30 frames per second, this approach exhibited a high error rate (average Mean Squared Error (MSE) of 0.537), attributable to the loss of critical image details during the preprocessing stage. Subsequently, the performance of a fine-tuned YOLOv8 model, trained on a reformatted version of the CULane dataset, was assessed. The combination of object detection and a subsequent Hough transform yielded high accuracy, demonstrating the model's ability to learn and identify relevant lane features. The deep CNNs demonstrated superior performance over classical image processing techniques in terms of lane detection accuracy, underscoring their potential applicability within the realm of autonomous vehicle technology.


Received
15/12/2023
Revised
25/04/2024
Accepted
07/05/2024
Published
15/05/2024
Section
Scientific articles