An Encoder-Sequencer-Decoder Network for Lane Detection to Facilitate Autonomous Driving

Published in: International Conference on Control, Automation and Systems (Online), pp. 899-904
Main authors: Hussain, Muhammad Ishfaq, Rafique, Muhammad Aasim, Ko, Yeongmin, Khan, Zafran, Olimov, Farrukh, Naz, Zubia, Kim, Jeongbae, Jeon, Moongu
Format: Conference paper
Language: English
Publication details: ICROS, 17.10.2023
ISSN: 2642-3901
Description
Summary: Lane detection in all weather conditions is a pressing necessity for autonomous driving. Accurate lane detection ensures the safe operation of autonomous vehicles, enabling advanced driver assistance systems to effectively track and keep the vehicle within its lane. Traditional lane detection techniques rely heavily on a single image frame captured by the camera, which poses limitations. Moreover, these conventional methods demand a constant stream of pristine images for uninterrupted lane detection, so their performance degrades under challenges such as low brightness, shadows, occlusions, and deteriorating environmental conditions. Recognizing that lanes appear as continuous sequential patterns on the road, our approach leverages a sequential model that processes multiple images for lane detection. In this study, we propose a deep neural network model to extract crucial lane information from a sequence of images. Our model adopts a convolutional neural network in an encoder/decoder architecture and incorporates a long short-term memory (LSTM) model for sequential feature extraction. We evaluate the performance of the proposed model on the TuSimple and CULane datasets, showcasing its superiority across various lane detection scenarios. Comparative analysis with state-of-the-art lane detection methods further substantiates our model's effectiveness.
DOI: 10.23919/ICCAS59377.2023.10316884
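
Note: the sketch below is a minimal illustration of the encoder-sequencer-decoder idea described in the summary, not the authors' published model. Each frame is encoded by a small CNN, a ConvLSTM-style sequencer fuses features across frames, and a decoder upsamples the last hidden state into a single-channel lane mask. The layer widths, frame count, input resolution, and the ConvLSTM cell itself are illustrative assumptions.

# Minimal sketch (PyTorch) of an encoder-sequencer-decoder lane-segmentation network.
# All sizes and the binary-mask output head are assumptions, not the published design.
import torch
import torch.nn as nn


class ConvLSTMCell(nn.Module):
    """Single convolutional LSTM cell operating on feature maps."""

    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        # One convolution produces all four gate pre-activations at once.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, c


class EncoderSequencerDecoder(nn.Module):
    """Encode each frame, fuse the sequence with a ConvLSTM, decode a lane mask."""

    def __init__(self, hid_ch=32):
        super().__init__()
        self.encoder = nn.Sequential(          # 3 x H x W -> hid_ch x H/4 x W/4
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, hid_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.sequencer = ConvLSTMCell(hid_ch, hid_ch)
        self.decoder = nn.Sequential(          # upsample back to input resolution
            nn.ConvTranspose2d(hid_ch, 16, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),      # 1-channel lane logits
        )

    def forward(self, frames):                 # frames: (B, T, 3, H, W)
        b, t, _, h, w = frames.shape
        hid = frames.new_zeros(b, self.sequencer.hid_ch, h // 4, w // 4)
        cell = torch.zeros_like(hid)
        for step in range(t):                  # run the sequencer over the frame sequence
            feat = self.encoder(frames[:, step])
            hid, cell = self.sequencer(feat, (hid, cell))
        return self.decoder(hid)               # (B, 1, H, W) lane logits for the last frame


if __name__ == "__main__":
    clip = torch.randn(2, 5, 3, 128, 256)      # batch of 2 clips, 5 frames each
    logits = EncoderSequencerDecoder()(clip)
    print(logits.shape)                        # torch.Size([2, 1, 128, 256])

A sigmoid over the logits followed by thresholding would yield a binary lane mask for the most recent frame; because the hidden state carries information from earlier frames, the prediction can remain stable when a single frame is degraded by shadows or occlusion, which is the motivation stated in the summary.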