Lane line detection and departure estimation in a complex environment by using an asymmetric kernel convolution algorithm


Detailed Description

Saved in:
Bibliographic details
Published in: The Visual Computer, Vol. 39, Issue 2, pp. 519–538
Main authors: Haris, Malik, Hou, Jin, Wang, Xiaomin
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg, 01.02.2023
Springer Nature B.V
Keywords:
ISSN: 0178-2789, 1432-2315
Online access: Full text
Abstract Deep learning has made tremendous advances in image segmentation and object classification. However, real-time lane line detection and departure estimation in complex traffic conditions remain hard problems in autonomous driving research. Traditional lane line detection methods require manual parameter tuning and are still susceptible to interference from occluding objects, lighting changes, and pavement deterioration, so developing accurate lane line detection and departure estimation algorithms remains a challenge. This article investigates a convolutional neural network (CNN) for lane line detection and departure estimation in a complex road environment. A CNN's weight sharing lowers the number of training parameters, and CNNs can learn and extract features for image segmentation, object detection, classification, and other applications. Here, the symmetric kernel convolution of the classical CNN is upgraded to an asymmetric kernel convolution structure (AK-CNN) tailored to the features of lane line detection and departure estimation. This reduces the network's computational load and improves the speed of lane line detection and departure estimation. Experiments on the CULane dataset show 80.3% detection accuracy in complex environments at 84.5 fps, enabling real-time lane line detection.
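The core idea in the abstract — replacing a symmetric n×n kernel with a stack of n×1 and 1×n kernels — cuts the weight count per kernel from n² to 2n, i.e. by a factor of 2/n. A minimal sketch of that arithmetic (the channel sizes below are illustrative assumptions, not values taken from the paper):

```python
# Parameter-count comparison: one symmetric 3x3 convolution vs. the
# asymmetric factorization into stacked 3x1 + 1x3 convolutions.
# Channel sizes are illustrative, not taken from the paper.

def conv_weights(kh: int, kw: int, c_in: int, c_out: int) -> int:
    """Number of weights in a 2D convolution layer (bias ignored)."""
    return kh * kw * c_in * c_out

c_in, c_out = 64, 64

symmetric = conv_weights(3, 3, c_in, c_out)
asymmetric = conv_weights(3, 1, c_in, c_out) + conv_weights(1, 3, c_out, c_out)

print(f"3x3 kernel:        {symmetric} weights")
print(f"3x1 + 1x3 kernels: {asymmetric} weights")
print(f"ratio:             {asymmetric / symmetric:.2f}")
```

For a 3×3 kernel the factorization keeps roughly two thirds of the weights (and of the multiply-accumulates per output pixel), which is consistent with the abstract's claim of a lower computational load and higher detection speed.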
Author Haris, Malik
Wang, Xiaomin
Hou, Jin
Author_xml – sequence: 1
  givenname: Malik
  orcidid: 0000-0002-6450-1715
  surname: Haris
  fullname: Haris, Malik
  organization: School of Information Science and Technology, Southwest Jiaotong University, National Engineering Laboratory of Integrated Transportation Big Data Application Technology, Southwest Jiaotong University
– sequence: 2
  givenname: Jin
  orcidid: 0000-0001-7438-5327
  surname: Hou
  fullname: Hou, Jin
  email: jhou@swjtu.edu.cn
  organization: School of Information Science and Technology, Southwest Jiaotong University, National Engineering Laboratory of Integrated Transportation Big Data Application Technology, Southwest Jiaotong University
– sequence: 3
  givenname: Xiaomin
  orcidid: 0000-0003-4934-4288
  surname: Wang
  fullname: Wang, Xiaomin
  organization: School of Information Science and Technology, Southwest Jiaotong University, National Engineering Laboratory of Integrated Transportation Big Data Application Technology, Southwest Jiaotong University
ContentType Journal Article
Copyright The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2021
DOI 10.1007/s00371-021-02353-6
Discipline Engineering
Computer Science
EISSN 1432-2315
EndPage 538
GrantInformation_xml – fundername: department of science and technology of sichuan province
  grantid: 2019YFH0097; 2020YFG0353
  funderid: http://dx.doi.org/10.13039/501100004829
ISSN 0178-2789
IsPeerReviewed true
IsScholarly true
Issue 2
Keywords Asymmetric kernel CNN (AK-CNN)
Lane line detection
Lane departure estimation
CULane dataset
Scale perception
Language English
ORCID 0000-0003-4934-4288
0000-0001-7438-5327
0000-0002-6450-1715
PageCount 20
PublicationDate 2023-02-01
PublicationPlace Berlin/Heidelberg
PublicationSubtitle International Journal of Computer Graphics
PublicationTitle The Visual computer
PublicationTitleAbbrev Vis Comput
PublicationYear 2023
Publisher Springer Berlin Heidelberg
Springer Nature B.V
References Kumawat, A. Panda, S.: A robust edge detection algorithm based on feature-based image registration (FBIR) using improved canny with fuzzy logic (ICWFL). Vis. Comput. 1–22 (2021)
Xiong, Y., et al.: “Upsnet: a unified panoptic segmentation network. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2019, pp. 8810–8818. https://doi.org/10.1109/CVPR.2019.00902
GuotianFANBoLIQinHANRihuaJGangQURobust lane detection and tracking based on machine visionZTE Commun.20211846977
TranNGlobal Status Report on Road Safety2018GenevaWorld Health Organization511
YeYYHaoXLChenHJLane detection method based on lane structural analysis and CNNsIET Intel. Transport Syst.201812651352010.1049/iet-its.2017.0143
Kim, J., Lee, M.: Robust lane detection based on convolutional neural network and random sample consensus. Lecture Notes Computer Science (including Subseries in Lecture Notes Artificial Intelligence, Lecture Notes Bioinformatics), vol. 8834, pp. 454–461 (2014). https://doi.org/10.1007/978-3-319-12637-1_57
LiJMeiXProkhorovDTaoDDeep neural network for structural prediction and lane detection in traffic sceneIEEE Trans. Neural Netw. Learn. Syst.201628369070310.1109/TNNLS.2016.2522428
MammarSGlaserSNettoMTime to line crossing for lane departure avoidance: a theoretical study and an experimental settingIEEE Trans. Intell. Transp. Syst.20067222624110.1109/TITS.2006.874707
Su, J., Chen, C., Zhang, K., Luo, J., Wei, X., Wei, X.: Structure guided lane detection. arXiv Prepr. arXiv2105.05403 (2021)
Xu, H., Wang, S., Cai, X., Zhang, W., Liang, X., Li, Z.: Curvelane-nas: unifying lane-sensitive architecture search and adaptive point blending. In: Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XV 16, pp. 689–704 (2020)
Choi, J., Chun, D., Kim, H., Lee, H.J.: Gaussian YOLOv3: an accurate and fast object detector using localization uncertainty for autonomous driving. In: Proceedings of the IEEE International Conference on Computer Vision, vol. 2019, pp. 502–511 (2019). https://doi.org/10.1109/ICCV.2019.00059
Gao, Q., Feng, Y., Wang, L.: A real-time lane detection and tracking algorithm. In: IEEE 2nd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), pp. 1230–1234 (2017)
Li, H.T., Todd, Z., Bielski, N., Carroll, F.: 3D lidar point-cloud projection operator and transfer machine learning for effective road surface features detection and segmentation. Vis. Comput. 1–16 (2021)
WangXLiuYHaiDLane detection method based on double ROI and varied-line-spacing-scanningJ. Command Control201732154159
Abadi, M., et al.: TensorFlow: large-scale machine learning on heterogeneous distributed systems. Arxiv, 2016, [Online]. Available: http://arxiv.org/abs/1603.04467
McCallJCTrivediMMVideo-based lane estimation and tracking for driver assistance: survey, system, and evaluationIEEE Trans. Intell. Transp. Syst.200671203710.1109/TITS.2006.869595
Li, X., He, M., Li, H., Shen, H.: A combined loss-based multiscale fully convolutional network for high-resolution remote sensing image change detection. IEEE Geosci. Remote Sens. Lett. (2021)
HarisMGlowaczALane line detection based on object feature distillationElectronics2021109110210.3390/electronics10091102
Yang, T., Liang, R., Huang, L.: Vehicle counting method based on attention mechanism SSD and state detection. Vis. Comput. 1–11 (2021)
JeppssonHÖstlingMLubbeNReal life safety benefits of increasing brake deceleration in car-to-pedestrian accidents: simulation of vacuum emergency brakingAccid. Anal. Prev.201811131132010.1016/j.aap.2017.12.001
Wang, B., Wang, Z., Zhang, Y.: Polynomial regression network for variable-number lane detection. In: European Conference on Computer Vision, pp. 719–734 (2020)
Tabelini, L., Berriel, R., Paixao, T.M., Badue, C., De Souza, A.F., Oliveira-Santos, T.: Keep your eyes on the lane: real-time attention-guided lane detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 294–302 (2021)
Wang, Z., Ren, W., Qiu, Q.: LaneNet: real-time lane detection networks for autonomous driving. arXiv (2018)
He, B., Ai, R., Yan, Y., Lang, X.: Accurate and robust lane detection based on Dual-View Convolutional Neutral Network. In: IEEE Intelligent Vehicles Symposium, Proceedings, vol. 2016, pp. 1041–1046. IEEE. https://doi.org/10.1109/IVS.2016.7535517
Qin, Z., Wang, H., Li, X.: Ultra fast structure-aware deep lane detection. In: Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXIV 16, pp. 276–291 (2020)
Wen-juanGSYZYuan-juanTQZCombining the hough transform and an improved least squares method for line detectionComput. Sci.201244196200
ZhaoweiYUXiaoboWULinSIllumination invariant lane detection algorithm based on dynamic region of interestComput. Eng20174324356
GuillouEMeneveauxDMaiselEBouatouchKUsing vanishing points for camera calibration and coarse 3D reconstruction from a single imageVis. Comput.200016739641010.1007/PL000133941009.68976
DingLXuZZongJXiaoJShuCXuBA lane line detection algorithm based on convolutional neural networkGeom. Vis.2021138617510.1007/978-3-030-72073-5_14
An, F.-P., Liu, J., Bai, L.: Object recognition algorithm based on optimized nonlinear activation function-global convolutional neural network. Vis. Comput. 1–13 (2021)
NCSA.: NCSA Data Resource Website, Fatality Analysis Reporting System (FARS) Encyclopaedia, p. 20. National Center for Statistics and Analysis (NCSA) Motor Vehicle Traffic Crash Data. US Department of Transportation. National Center for Statistics and Analysis (NCSA) Motor Vehicle Traffic Crash Data. US Department of Transportation (2018). Available: http://www-fars.nhtsa.dot.gov/main/index.aspx
GopalanRHongTShneierMChellappaRA learning approach towards detection and tracking of lane markingsIEEE Trans. Intell. Transp. Syst.20121331088109810.1109/TITS.2012.2184756
ChenGHZhouWWangFJXiaoBJDaiSFLane detection based on improved canny detector and least square fittingAdv. Mater. Res.2013765–76723832387
Liu, L., Chen, X., Zhu, S., Tan, P.: CondLaneNet: a top-to-down lane detection framework based on conditional convolution. arXiv Prepr. arXiv2105.05003 (2021)
Liu, S., Xiong, M., Zhong, W., Xiong, H.: Towards Industrial Scenario Lane Detection: Vision-Based AGV Navigation Methods. In: 2020 IEEE International Conference on Mechatronics and Automation, ICMA, pp. 1101–1106 (2020). https://doi.org/10.1109/ICMA49215.2020.9233837
HeZLiQFengHXuZFast and sub-pixel precision target tracking algorithm for intelligent dual-resolution cameraVis. Comput.20203661157117110.1007/s00371-019-01724-4
Zhao, K., Meuter, M., Nunn, C., Müller, D., Müller-Schneiders, S., Pauli, J.: A novel multi-lane detection and tracking system. In: IEEE Intelligent Vehicles Symposium, pp. 1084–1089 (2012)
Chetlur, S., et al.: cuDNN: Efficient primitives for deep learning. arXiv, Oct. 2014, Accessed: Mar. 05, 2021. [Online]. Available: http://arxiv.org/abs/1410.0759
Singh, K., Seth, A., Sandhu, H.S., Samdani, K.: A comprehensive review of convolutional neural network based image enhancement techniques. In: IEEE International Conference on System, Computation, Automation and Networking (ICSCAN), pp. 1–6 (2019)
JiaBLiuRZhuMReal-time obstacle detection with motion features using monocular visionVis. Comput.201531328129310.1007/s00371-014-0918-5
Haris, M., Hou, J., Wang, X.: Multi-scale spatial convolution algorithm for lane line detection and lane offset estimation in complex road conditions. Signal Process. Image Commun. 116413 (2021)
HarisMGlowaczARoad object detection: a comparative study of deep learning-based algorithmsElectronics20211016193210.3390/ELECTRONICS10161932
SrivastavaSLumbMSingalRLane detection using median filter, wiener filter and integrated hough transformJ. Autom. Control Eng.20153325826410.12720/joace.3.3.258-264
Yoo, S., et al.: End-to-end lane marker detection via row-wise classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 1006–1007 (2020)
Lee, H., Kim, S., Park, S., Jeong, Y., Lee, H., Yi, K.: AVM/LiDAR sensor based lane marking detection method for automated driving on complex urban roads. In: IEEE Intelligent Vehicles Symposium (IV), pp. 1434–1439 (2017)
CuiGWangJLiJRobust multilane detection and tracking in urban scenarios based on LIDAR and mono-visionIET Image Process.20148526927910.1049/iet-ipr.2013.0371
Barsan, I.A., Wang, S., Pokrovsky, A., Urtasun, R.: Learning to localize using a lidar intensity map. arXiv Prepr. arXiv2012.10902 (2020)
Bailo, O., Lee, S., Rameau, F., Yoon, J.S., Kweon, I.S.: Robust road marking detection & recognition using density-based grouping & machine learning techniques. In: Proceedings-2017 IEEE Winter Conference on Applications of Computer Vision, WACV 2017, pp. 760–768 (2017). https://doi.org/10.1109/WACV.2017.90
Zhu, J., Shi, F., Li, J.: Advanced driver assistance system based on machine vision. In: IEEE 4th Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), vol. 4, pp. 2026–2030 (2021)
Qu, Z., Jin, H., Zhou, Y., Yang, Z., Zhang, W.: Focus on local: detecting lane marker from bottom up via key point. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14122–14130 (2021)
LiYHuangHLiXChenLNighttime lane markings detection based on Canny operator and Hough transformSci. Technol. Eng20161616711815
Hou, Y., Ma, Z., Liu, C., Loy, C.C.: Learning lightweight lane detection CNNS by self attention distillation. In: Proceedings of the IEEE International Conference on Computer Vision, vol. 2019, pp. 1013–1021. https://doi.org/10.1109/ICCV.2019.00110
GuoJKurupUShahMIs it safe to drive? An overview of factors, metrics, and datasets for driveability assessment in autonomous drivingIEEE Trans. Intell. Transp. Syst.20192183135315110.1109/TITS.2019.2926042
HarisMHouJObstacle detection and safely navigate the autonomous v
J Li (2353_CR26) 2016; 28
LC Chen (2353_CR55) 2018; 40
2353_CR42
2353_CR41
2353_CR43
2353_CR40
M Haris (2353_CR39) 2021; 10
YY Ye (2353_CR45) 2018; 12
R Gopalan (2353_CR22) 2012; 13
M Haris (2353_CR13) 2020; 20
H Jeppsson (2353_CR2) 2018; 111
2353_CR53
2353_CR11
2353_CR10
M Haris (2353_CR27) 2021; 10
2353_CR54
Y Li (2353_CR16) 2016; 16
2353_CR19
S Mammar (2353_CR50) 2006; 7
2353_CR57
2353_CR12
2353_CR56
2353_CR15
2353_CR59
2353_CR58
X Wang (2353_CR18) 2017; 3
J-P Tarel (2353_CR52) 2012; 4
2353_CR3
2353_CR5
J Guo (2353_CR51) 2019; 21
TH Chan (2353_CR33) 2015; 24
2353_CR20
2353_CR64
2353_CR63
Z Chen (2353_CR46) 2020; 29
2353_CR66
2353_CR65
2353_CR7
2353_CR60
2353_CR8
2353_CR9
2353_CR62
2353_CR61
2353_CR28
GH Chen (2353_CR49) 2013; 765–767
B Jia (2353_CR37) 2015; 31
GSYZ Wen-juan (2353_CR48) 2012; 4
2353_CR29
2353_CR24
2353_CR23
2353_CR25
G Cui (2353_CR4) 2014; 8
2353_CR31
2353_CR30
2353_CR32
L Ding (2353_CR44) 2021; 1386
S Srivastava (2353_CR47) 2015; 3
N Tran (2353_CR1) 2018
YU Zhaowei (2353_CR17) 2017; 43
2353_CR38
FAN Guotian (2353_CR14) 2021; 18
2353_CR36
E Guillou (2353_CR34) 2000; 16
J Kim (2353_CR21) 2017; 87
Z He (2353_CR6) 2020; 36
JC McCall (2353_CR35) 2006; 7
References_xml – reference: Gurghian, A., Koduri, T., Bailur, S.V., Carey, K.J., Murali, V.N.: DeepLanes: end-to-end lane position estimation using deep neural networks. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 38–45 (2016). https://doi.org/10.1109/CVPRW.2016.12
– reference: WangXLiuYHaiDLane detection method based on double ROI and varied-line-spacing-scanningJ. Command Control201732154159
– reference: Wen-juanGSYZYuan-juanTQZCombining the hough transform and an improved least squares method for line detectionComput. Sci.201244196200
– reference: YeYYHaoXLChenHJLane detection method based on lane structural analysis and CNNsIET Intel. Transport Syst.201812651352010.1049/iet-its.2017.0143
– reference: GuotianFANBoLIQinHANRihuaJGangQURobust lane detection and tracking based on machine visionZTE Commun.20211846977
– reference: HarisMGlowaczARoad object detection: a comparative study of deep learning-based algorithmsElectronics20211016193210.3390/ELECTRONICS10161932
– reference: An, F.-P., Liu, J., Bai, L.: Object recognition algorithm based on optimized nonlinear activation function-global convolutional neural network. Vis. Comput. 1–13 (2021)
– reference: Singh, K., Seth, A., Sandhu, H.S., Samdani, K.: A comprehensive review of convolutional neural network based image enhancement techniques. In: IEEE International Conference on System, Computation, Automation and Networking (ICSCAN), pp. 1–6 (2019)
– reference: KimJKimJJangG-JLeeMFast learning method for convolutional neural networks using extreme learning machine and its application to lane detectionNeural Netw.20178710912110.1016/j.neunet.2016.12.002
– reference: Haris, M., Hou, J., Wang, X.: Multi-scale spatial convolution algorithm for lane line detection and lane offset estimation in complex road conditions. Signal Process. Image Commun. 116413 (2021)
– reference: Liu, S., Xiong, M., Zhong, W., Xiong, H.: Towards Industrial Scenario Lane Detection: Vision-Based AGV Navigation Methods. In: 2020 IEEE International Conference on Mechatronics and Automation, ICMA, pp. 1101–1106 (2020). https://doi.org/10.1109/ICMA49215.2020.9233837
– reference: GuillouEMeneveauxDMaiselEBouatouchKUsing vanishing points for camera calibration and coarse 3D reconstruction from a single imageVis. Comput.200016739641010.1007/PL000133941009.68976
– reference: Zheng, T. et al.: Resa: recurrent feature-shift aggregator for lane detection. arXiv Prepr. arXiv2008.13719 (2020)
– reference: CuiGWangJLiJRobust multilane detection and tracking in urban scenarios based on LIDAR and mono-visionIET Image Process.20148526927910.1049/iet-ipr.2013.0371
– reference: Qin, Z., Wang, H., Li, X.: Ultra fast structure-aware deep lane detection. In: Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXIV 16, pp. 276–291 (2020)
– reference: LiYHuangHLiXChenLNighttime lane markings detection based on Canny operator and Hough transformSci. Technol. Eng20161616711815
– reference: ChanTHJiaKGaoSLuJZengZMaYPCANet: a simple deep learning baseline for image classification?IEEE Trans. Image Process.2015241250175032340609910.1109/TIP.2015.24756251408.94080
– reference: TarelJ-PHautiereNCaraffaLCordAHalmaouiHGruyerDVision enhancement in homogeneous and heterogeneous fogIEEE Intell. Transp. Syst. Mag.20124262010.1109/MITS.2012.2189969
– reference: Li, H.T., Todd, Z., Bielski, N., Carroll, F.: 3D lidar point-cloud projection operator and transfer machine learning for effective road surface features detection and segmentation. Vis. Comput. 1–16 (2021)
– reference: Garnett, N., Cohen, R., Pe’Er, T., Lahav, R., Levi, D.: 3D-LaneNet: End-to-end 3D multiple lane detection. In: Proceedings of the IEEE International Conference on Computer Vision, vol. 2019, pp. 2921–2930. https://doi.org/10.1109/ICCV.2019.00301
– reference: Lee, H., Kim, S., Park, S., Jeong, Y., Lee, H., Yi, K.: AVM/LiDAR sensor based lane marking detection method for automated driving on complex urban roads. In: IEEE Intelligent Vehicles Symposium (IV), pp. 1434–1439 (2017)
– reference: ChenLCPapandreouGKokkinosIMurphyKYuilleALDeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFsIEEE Trans. Pattern Anal. Mach. Intell.201840483484810.1109/TPAMI.2017.2699184
– reference: Yoo, S., et al.: End-to-end lane marker detection via row-wise classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 1006–1007 (2020)
– reference: GopalanRHongTShneierMChellappaRA learning approach towards detection and tracking of lane markingsIEEE Trans. Intell. Transp. Syst.20121331088109810.1109/TITS.2012.2184756
– reference: LiJMeiXProkhorovDTaoDDeep neural network for structural prediction and lane detection in traffic sceneIEEE Trans. Neural Netw. Learn. Syst.201628369070310.1109/TNNLS.2016.2522428
– reference: Kumawat, A. Panda, S.: A robust edge detection algorithm based on feature-based image registration (FBIR) using improved canny with fuzzy logic (ICWFL). Vis. Comput. 1–22 (2021)
– reference: Pan, X., Shi, J., Luo, P., Wang, X., Tang, X.: Spatial as deep: spatial CNN for traffic scene understanding. In: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018, pp. 7276–7283 (2018)
– reference: ZhaoweiYUXiaoboWULinSIllumination invariant lane detection algorithm based on dynamic region of interestComput. Eng20174324356
– reference: JiaBLiuRZhuMReal-time obstacle detection with motion features using monocular visionVis. Comput.201531328129310.1007/s00371-014-0918-5
– reference: Liang, D., et al.: LineNet: a zoomable CNN for crowdsourced high definition maps modeling in urban environments. arXiv (2018)
– reference: Liu, Y.-B., Zeng, M., Meng, Q.-H.: Heatmap-based vanishing point boosts lane detection. arXiv preprint arXiv:2007.15602 (2020)
– reference: Qu, Z., Jin, H., Zhou, Y., Yang, Z., Zhang, W.: Focus on local: detecting lane marker from bottom up via key point. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14122–14130 (2021)
– reference: Xiong, Y., et al.: UPSNet: a unified panoptic segmentation network. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2019, pp. 8810–8818. https://doi.org/10.1109/CVPR.2019.00902
– reference: Tran, N.: Global Status Report on Road Safety, pp. 5–11. World Health Organization, Geneva (2018)
– reference: Barsan, I.A., Wang, S., Pokrovsky, A., Urtasun, R.: Learning to localize using a lidar intensity map. arXiv preprint arXiv:2012.10902 (2020)
– reference: Haris, M., Hou, J.: Obstacle detection and safely navigate the autonomous vehicle from unexpected obstacles on the driving lane. Sensors (Switzerland) 20(17), 1–22 (2020). https://doi.org/10.3390/s20174719
– reference: McCall, J.C., Trivedi, M.M.: Video-based lane estimation and tracking for driver assistance: survey, system, and evaluation. IEEE Trans. Intell. Transp. Syst. 7(1), 20–37 (2006). https://doi.org/10.1109/TITS.2006.869595
– reference: Zhu, J., Shi, F., Li, J.: Advanced driver assistance system based on machine vision. In: IEEE 4th Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), vol. 4, pp. 2026–2030 (2021)
– reference: Abadi, M., et al.: TensorFlow: large-scale machine learning on heterogeneous distributed systems. arXiv (2016). [Online]. Available: http://arxiv.org/abs/1603.04467
– reference: Gao, Q., Feng, Y., Wang, L.: A real-time lane detection and tracking algorithm. In: IEEE 2nd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), pp. 1230–1234 (2017)
– reference: Chetlur, S., et al.: cuDNN: Efficient primitives for deep learning. arXiv, Oct. 2014, Accessed: Mar. 05, 2021. [Online]. Available: http://arxiv.org/abs/1410.0759
– reference: He, B., Ai, R., Yan, Y., Lang, X.: Accurate and robust lane detection based on Dual-View Convolutional Neutral Network. In: IEEE Intelligent Vehicles Symposium, Proceedings, vol. 2016, pp. 1041–1046. IEEE. https://doi.org/10.1109/IVS.2016.7535517
– reference: He, Z., Li, Q., Feng, H., Xu, Z.: Fast and sub-pixel precision target tracking algorithm for intelligent dual-resolution camera. Vis. Comput. 36(6), 1157–1171 (2020). https://doi.org/10.1007/s00371-019-01724-4
– reference: Tabelini, L., Berriel, R., Paixao, T.M., Badue, C., De Souza, A.F., Oliveira-Santos, T.: Keep your eyes on the lane: real-time attention-guided lane detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 294–302 (2021)
– reference: Jeppsson, H., Östling, M., Lubbe, N.: Real life safety benefits of increasing brake deceleration in car-to-pedestrian accidents: simulation of vacuum emergency braking. Accid. Anal. Prev. 111, 311–320 (2018). https://doi.org/10.1016/j.aap.2017.12.001
– reference: Hou, Y., Ma, Z., Liu, C., Loy, C.C.: Learning lightweight lane detection CNNS by self attention distillation. In: Proceedings of the IEEE International Conference on Computer Vision, vol. 2019, pp. 1013–1021. https://doi.org/10.1109/ICCV.2019.00110
– reference: Guo, J., Kurup, U., Shah, M.: Is it safe to drive? An overview of factors, metrics, and datasets for driveability assessment in autonomous driving. IEEE Trans. Intell. Transp. Syst. 21(8), 3135–3151 (2019). https://doi.org/10.1109/TITS.2019.2926042
– reference: Chen, G.H., Zhou, W., Wang, F.J., Xiao, B.J., Dai, S.F.: Lane detection based on improved canny detector and least square fitting. Adv. Mater. Res. 765–767, 2383–2387 (2013)
– reference: Bailo, O., Lee, S., Rameau, F., Yoon, J.S., Kweon, I.S.: Robust road marking detection & recognition using density-based grouping & machine learning techniques. In: Proceedings-2017 IEEE Winter Conference on Applications of Computer Vision, WACV 2017, pp. 760–768 (2017). https://doi.org/10.1109/WACV.2017.90
– reference: Liu, L., Chen, X., Zhu, S., Tan, P.: CondLaneNet: a top-to-down lane detection framework based on conditional convolution. arXiv preprint arXiv:2105.05003 (2021)
– reference: Ding, L., Xu, Z., Zong, J., Xiao, J., Shu, C., Xu, B.: A lane line detection algorithm based on convolutional neural network. Geom. Vis. 1386, 175 (2021). https://doi.org/10.1007/978-3-030-72073-5_14
– reference: Choi, J., Chun, D., Kim, H., Lee, H.J.: Gaussian YOLOv3: an accurate and fast object detector using localization uncertainty for autonomous driving. In: Proceedings of the IEEE International Conference on Computer Vision, vol. 2019, pp. 502–511 (2019). https://doi.org/10.1109/ICCV.2019.00059
– reference: Kim, J., Lee, M.: Robust lane detection based on convolutional neural network and random sample consensus. Lecture Notes Computer Science (including Subseries in Lecture Notes Artificial Intelligence, Lecture Notes Bioinformatics), vol. 8834, pp. 454–461 (2014). https://doi.org/10.1007/978-3-319-12637-1_57
– reference: Wang, B., Wang, Z., Zhang, Y.: Polynomial regression network for variable-number lane detection. In: European Conference on Computer Vision, pp. 719–734 (2020)
– reference: Yang, T., Liang, R., Huang, L.: Vehicle counting method based on attention mechanism SSD and state detection. Vis. Comput. 1–11 (2021)
– reference: Wang, Z., Ren, W., Qiu, Q.: LaneNet: real-time lane detection networks for autonomous driving. arXiv (2018)
– reference: Ko, Y., Lee, Y., Azam, S., Munir, F., Jeon, M., Pedrycz, W.: Key points estimation and point instance segmentation approach for lane detection. IEEE Trans. Intell. Transp. Syst. (2021)
– reference: NCSA: NCSA Data Resource Website, Fatality Analysis Reporting System (FARS) Encyclopaedia, p. 20. National Center for Statistics and Analysis (NCSA) Motor Vehicle Traffic Crash Data. US Department of Transportation (2018). Available: http://www-fars.nhtsa.dot.gov/main/index.aspx
– reference: Zhao, K., Meuter, M., Nunn, C., Müller, D., Müller-Schneiders, S., Pauli, J.: A novel multi-lane detection and tracking system. In: IEEE Intelligent Vehicles Symposium, pp. 1084–1089 (2012)
– reference: Mammar, S., Glaser, S., Netto, M.: Time to line crossing for lane departure avoidance: a theoretical study and an experimental setting. IEEE Trans. Intell. Transp. Syst. 7(2), 226–241 (2006). https://doi.org/10.1109/TITS.2006.874707
– reference: Haris, M., Glowacz, A.: Lane line detection based on object feature distillation. Electronics 10(9), 1102 (2021). https://doi.org/10.3390/electronics10091102
– reference: Chen, Z., Shi, J., Li, W.: Learned fast HEVC intra coding. IEEE Trans. Image Process. 29, 5431–5446 (2020). https://doi.org/10.1109/TIP.2020.2982832
– reference: Srivastava, S., Lumb, M., Singal, R.: Lane detection using median filter, wiener filter and integrated hough transform. J. Autom. Control Eng. 3(3), 258–264 (2015). https://doi.org/10.12720/joace.3.3.258-264
– reference: Li, X., He, M., Li, H., Shen, H.: A combined loss-based multiscale fully convolutional network for high-resolution remote sensing image change detection. IEEE Geosci. Remote Sens. Lett. (2021)
– reference: Su, J., Chen, C., Zhang, K., Luo, J., Wei, X., Wei, X.: Structure guided lane detection. arXiv preprint arXiv:2105.05403 (2021)
– reference: Xu, H., Wang, S., Cai, X., Zhang, W., Liang, X., Li, Z.: Curvelane-nas: unifying lane-sensitive architecture search and adaptive point blending. In: Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XV 16, pp. 689–704 (2020)
– volume: 3
  start-page: 258
  issue: 3
  year: 2015
  ident: 2353_CR47
  publication-title: J. Autom. Control Eng.
  doi: 10.12720/joace.3.3.258-264
– ident: 2353_CR8
  doi: 10.1109/IMCEC51613.2021.9482067
– ident: 2353_CR10
  doi: 10.1016/j.image.2021.116413
– ident: 2353_CR5
  doi: 10.1007/s00371-021-02103-8
– volume: 31
  start-page: 281
  issue: 3
  year: 2015
  ident: 2353_CR37
  publication-title: Vis. Comput.
  doi: 10.1007/s00371-014-0918-5
– ident: 2353_CR65
  doi: 10.1109/CVPR46437.2021.01390
– volume: 43
  start-page: 43
  issue: 2
  year: 2017
  ident: 2353_CR17
  publication-title: Comput. Eng
– volume: 20
  start-page: 1
  issue: 17
  year: 2020
  ident: 2353_CR13
  publication-title: Sensors (Switzerland)
  doi: 10.3390/s20174719
– ident: 2353_CR54
– ident: 2353_CR58
  doi: 10.1109/CVPRW50498.2020.00511
– ident: 2353_CR61
  doi: 10.1007/978-3-030-58523-5_42
– volume: 111
  start-page: 311
  year: 2018
  ident: 2353_CR2
  publication-title: Accid. Anal. Prev.
  doi: 10.1016/j.aap.2017.12.001
– ident: 2353_CR28
  doi: 10.1007/s00371-021-02161-y
– ident: 2353_CR66
  doi: 10.1109/ICCV48922.2021.00375
– ident: 2353_CR43
  doi: 10.1109/ICCV.2019.00301
– ident: 2353_CR63
  doi: 10.1109/CVPR46437.2021.00036
– start-page: 5
  volume-title: Global Status Report on Road Safety
  year: 2018
  ident: 2353_CR1
– volume: 29
  start-page: 5431
  year: 2020
  ident: 2353_CR46
  publication-title: IEEE Trans. Image Process.
  doi: 10.1109/TIP.2020.2982832
– volume: 10
  start-page: 1932
  issue: 16
  year: 2021
  ident: 2353_CR27
  publication-title: Electronics
  doi: 10.3390/ELECTRONICS10161932
– volume: 1386
  start-page: 175
  year: 2021
  ident: 2353_CR44
  publication-title: Geom. Vis.
  doi: 10.1007/978-3-030-72073-5_14
– volume: 13
  start-page: 1088
  issue: 3
  year: 2012
  ident: 2353_CR22
  publication-title: IEEE Trans. Intell. Transp. Syst.
  doi: 10.1109/TITS.2012.2184756
– ident: 2353_CR42
  doi: 10.1109/CVPR.2019.00902
– volume: 40
  start-page: 834
  issue: 4
  year: 2018
  ident: 2353_CR55
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2017.2699184
– volume: 765–767
  start-page: 2383
  year: 2013
  ident: 2353_CR49
  publication-title: Adv. Mater. Res.
– volume: 4
  start-page: 196
  issue: 4
  year: 2012
  ident: 2353_CR48
  publication-title: Comput. Sci.
– ident: 2353_CR31
  doi: 10.1109/WACV.2017.90
– ident: 2353_CR32
  doi: 10.1109/CVPRW.2016.12
– volume: 12
  start-page: 513
  issue: 6
  year: 2018
  ident: 2353_CR45
  publication-title: IET Intel. Transport Syst.
  doi: 10.1049/iet-its.2017.0143
– ident: 2353_CR53
– ident: 2353_CR19
– ident: 2353_CR30
  doi: 10.1109/ICMA49215.2020.9233837
– ident: 2353_CR40
  doi: 10.1109/ICoIAS.2018.8494031
– volume: 36
  start-page: 1157
  issue: 6
  year: 2020
  ident: 2353_CR6
  publication-title: Vis. Comput.
  doi: 10.1007/s00371-019-01724-4
– volume: 4
  start-page: 6
  issue: 2
  year: 2012
  ident: 2353_CR52
  publication-title: IEEE Intell. Transp. Syst. Mag.
  doi: 10.1109/MITS.2012.2189969
– ident: 2353_CR24
  doi: 10.1007/s00371-021-02196-1
– volume: 16
  start-page: 1671
  year: 2016
  ident: 2353_CR16
  publication-title: Sci. Technol. Eng
– ident: 2353_CR15
  doi: 10.1109/IVS.2012.6232168
– ident: 2353_CR64
  doi: 10.24963/ijcai.2021/138
– ident: 2353_CR25
  doi: 10.1109/IVS.2016.7535517
– ident: 2353_CR20
  doi: 10.1109/IVS.2017.7995911
– ident: 2353_CR11
  doi: 10.1109/ICSCAN.2019.8878706
– ident: 2353_CR60
  doi: 10.1007/978-3-030-58555-6_41
– ident: 2353_CR3
– volume: 3
  start-page: 154
  issue: 2
  year: 2017
  ident: 2353_CR18
  publication-title: J. Command Control
– ident: 2353_CR57
  doi: 10.1007/978-3-030-58586-0_17
– ident: 2353_CR59
  doi: 10.1109/TITS.2021.3088488
– ident: 2353_CR12
  doi: 10.1109/LGRS.2021.3098774
– volume: 10
  start-page: 1102
  issue: 9
  year: 2021
  ident: 2353_CR39
  publication-title: Electronics
  doi: 10.3390/electronics10091102
– ident: 2353_CR36
  doi: 10.1609/aaai.v32i1.12301
– ident: 2353_CR56
– ident: 2353_CR41
– ident: 2353_CR62
– volume: 18
  start-page: 69
  issue: 4
  year: 2021
  ident: 2353_CR14
  publication-title: ZTE Commun.
– ident: 2353_CR38
  doi: 10.1109/ICCV.2019.00110
– volume: 21
  start-page: 3135
  issue: 8
  year: 2019
  ident: 2353_CR51
  publication-title: IEEE Trans. Intell. Transp. Syst.
  doi: 10.1109/TITS.2019.2926042
– ident: 2353_CR7
  doi: 10.1109/ITNEC.2017.8284972
– volume: 16
  start-page: 396
  issue: 7
  year: 2000
  ident: 2353_CR34
  publication-title: Vis. Comput.
  doi: 10.1007/PL00013394
– volume: 28
  start-page: 690
  issue: 3
  year: 2016
  ident: 2353_CR26
  publication-title: IEEE Trans. Neural Netw. Learn. Syst.
  doi: 10.1109/TNNLS.2016.2522428
– ident: 2353_CR29
  doi: 10.1109/ICCV.2019.00059
– volume: 7
  start-page: 226
  issue: 2
  year: 2006
  ident: 2353_CR50
  publication-title: IEEE Trans. Intell. Transp. Syst.
  doi: 10.1109/TITS.2006.874707
– ident: 2353_CR9
  doi: 10.1007/s00371-020-02033-x
– volume: 8
  start-page: 269
  issue: 5
  year: 2014
  ident: 2353_CR4
  publication-title: IET Image Process.
  doi: 10.1049/iet-ipr.2013.0371
– volume: 7
  start-page: 20
  issue: 1
  year: 2006
  ident: 2353_CR35
  publication-title: IEEE Trans. Intell. Transp. Syst.
  doi: 10.1109/TITS.2006.869595
– ident: 2353_CR23
  doi: 10.1007/978-3-319-12637-1_57
– volume: 87
  start-page: 109
  year: 2017
  ident: 2353_CR21
  publication-title: Neural Netw.
  doi: 10.1016/j.neunet.2016.12.002
– volume: 24
  start-page: 5017
  issue: 12
  year: 2015
  ident: 2353_CR33
  publication-title: IEEE Trans. Image Process.
  doi: 10.1109/TIP.2015.2475625
SSID ssj0017749
Snippet Deep learning has made tremendous advances in the domains of image segmentation and object classification. However, real-time lane line detection and departure...
SourceID proquest
crossref
springer
SourceType Aggregation Database
Enrichment Source
Index Database
Publisher
StartPage 519
SubjectTerms Algorithms
Artificial Intelligence
Artificial neural networks
Asymmetry
Automobile safety
Cameras
Classification
Computer Graphics
Computer Science
Deep learning
Driving conditions
Estimates
Human error
Image Processing and Computer Vision
Image segmentation
Machine learning
Neural networks
Object recognition
Original Article
Parameter modification
Pavement deterioration
Real time
Roads & highways
Sensors
Teaching methods
Traffic
Traffic accidents & safety
SummonAdditionalLinks – databaseName: SpringerLINK Contemporary 1997-Present
  dbid: RSV
  priority: 102
  providerName: Springer Nature
Title Lane line detection and departure estimation in a complex environment by using an asymmetric kernel convolution algorithm
URI https://link.springer.com/article/10.1007/s00371-021-02353-6
https://www.proquest.com/docview/2918135831
Volume 39
WOSCitedRecordID wos000741634600004
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
journalDatabaseRights – providerCode: PRVPQU
  databaseName: Advanced Technologies & Aerospace Database
  customDbUrl:
  eissn: 1432-2315
  dateEnd: 20241214
  omitProxy: false
  ssIdentifier: ssj0017749
  issn: 0178-2789
  databaseCode: P5Z
  dateStart: 19970201
  isFulltext: true
  titleUrlDefault: https://search.proquest.com/hightechjournals
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: Computer Science Database
  customDbUrl:
  eissn: 1432-2315
  dateEnd: 20241214
  omitProxy: false
  ssIdentifier: ssj0017749
  issn: 0178-2789
  databaseCode: K7-
  dateStart: 19970201
  isFulltext: true
  titleUrlDefault: http://search.proquest.com/compscijour
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: ProQuest Central
  customDbUrl:
  eissn: 1432-2315
  dateEnd: 20241214
  omitProxy: false
  ssIdentifier: ssj0017749
  issn: 0178-2789
  databaseCode: BENPR
  dateStart: 19970201
  isFulltext: true
  titleUrlDefault: https://www.proquest.com/central
  providerName: ProQuest
– providerCode: PRVAVX
  databaseName: SpringerLINK Contemporary 1997-Present
  customDbUrl:
  eissn: 1432-2315
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0017749
  issn: 0178-2789
  databaseCode: RSV
  dateStart: 19970101
  isFulltext: true
  titleUrlDefault: https://link.springer.com/search?facet-content-type=%22Journal%22
  providerName: Springer Nature
linkProvider ProQuest
openUrl ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Lane+line+detection+and+departure+estimation+in+a+complex+environment+by+using+an+asymmetric+kernel+convolution+algorithm&rft.jtitle=The+Visual+computer&rft.au=Haris%2C+Malik&rft.au=Hou%2C+Jin&rft.au=Wang%2C+Xiaomin&rft.date=2023-02-01&rft.pub=Springer+Nature+B.V&rft.issn=0178-2789&rft.eissn=1432-2315&rft.volume=39&rft.issue=2&rft.spage=519&rft.epage=538&rft_id=info:doi/10.1007%2Fs00371-021-02353-6